DSA Notes
Data Types in C
Each variable in C has an associated data type. It specifies the type of data that the variable can
store, such as integer, character, float, double, etc. Each data type requires a different amount
of memory and supports a specific set of operations that can be performed on it.
The data types in C can be classified as follows:
• Primitive Data Types (int, char, float, double, void): The most basic data types, used for representing simple values such as integers, floats, characters, etc.
• User Defined Data Types (structure, union, enum): Data types that are defined by the user.
Integer Data Type
The integer data type in C is used to store whole numbers, both positive and negative.
• Size: 4 bytes
• Format Specifier: %d
Syntax of Integer
We use int keyword to declare the integer variable:
int var_name;
Note: The size of an integer data type is compiler-dependent. We can use sizeof operator to
check the actual size of any data type.
Example of int
// C program to print integer data types.
#include <stdio.h>
int main()
{
    // Integer value with positive data
    int a = 9;
    // Integer value with negative data
    int b = -9;
    printf("Integer value with positive data: %d\n", a);
    printf("Integer value with negative data: %d\n", b);
    return 0;
}
Output
Integer value with positive data: 9
Integer value with negative data: -9
Character Data Type
The character data type in C is used to store a single character.
• Size: 1 byte
• Format Specifier: %c
Syntax of char
The char keyword is used to declare the variable of character type:
char var_name;
Example of char
// C program to print character data type.
#include <stdio.h>
int main()
{
    char a = 'a';
    char c;

    printf("Value of a: %c\n", a);

    a++;
    printf("Value of a after increment is: %c\n", a);

    // c is assigned the ASCII value 99,
    // which corresponds to the
    // character 'c'
    c = 99;
    printf("Value of c: %c", c);
    return 0;
}
Output
Value of a: a
Value of a after increment is: b
Value of c: c
Float Data Type
In C programming, the float data type is used to store floating-point values, i.e. decimal and
exponential numbers (numbers with fractional parts), with single precision.
• Range: 1.2E-38 to 3.4E+38
• Size: 4 bytes
• Format Specifier: %f
Syntax of float
The float keyword is used to declare the variable as a floating point:
float var_name;
Example of Float
// C Program to demonstrate use
// of Floating types
#include <stdio.h>
int main()
{
float a = 9.0f;
float b = 2.5f;
// 2x10^-4
float c = 2E-4f;
printf("%f\n", a);
printf("%f\n", b);
printf("%f", c);
return 0;
}
Output
9.000000
2.500000
0.000200
Double Data Type
A Double data type in C is used to store decimal numbers (numbers with floating point values)
with double precision. It is used to define numeric values which hold numbers with decimal
values in C.
The double data type is a double-precision floating-point type that occupies 64 bits. Since double
has more precision than float, it occupies twice the memory of the float type. It can accommodate
about 15 to 17 significant decimal digits.
• Range: 1.7E-308 to 1.7E+308
• Size: 8 bytes
• Format Specifier: %lf
Syntax of Double
The variable can be declared as double precision floating point using the double keyword:
double var_name;
Example of Double
// C Program to demonstrate
// use of double data type
#include <stdio.h>
int main()
{
double a = 123123123.00;
double b = 12.293123;
double c = 2312312312.123123;
printf("%lf\n", a);
printf("%lf\n", b);
printf("%lf", c);
return 0;
}
Output
123123123.000000
12.293123
2312312312.123123
Void Data Type
The void data type in C is used to specify that no value is present. It does not provide a result
value to its caller. It has no values and no operations. It is used to represent nothing. Void is
used in multiple ways as function return type, function arguments as void, and pointers to void.
Syntax:
// function return type void
void exit(int check);
// Function without any parameter can accept void.
int print(void);
// memory allocation function which
// returns a pointer to void.
void *malloc (size_t size);
Example of Void
// C program to demonstrate
// use of void pointers
#include <stdio.h>
int main()
{
    int val = 30;
    // A void pointer can hold the address of any data type
    void *ptr = &val;
    // Cast back to int* before dereferencing
    printf("%d", *(int *)ptr);
    return 0;
}
Output
30
Example of sizeof
// C program to print the size of different data types
#include <stdio.h>
int main()
{
    int size_of_int = sizeof(int);
    int size_of_char = sizeof(char);
    int size_of_float = sizeof(float);
    int size_of_double = sizeof(double);

    printf("The size of int data type : %d\n", size_of_int);
    printf("The size of char data type : %d\n", size_of_char);
    printf("The size of float data type : %d\n", size_of_float);
    printf("The size of double data type : %d", size_of_double);
    return 0;
}
Output
The size of int data type : 4
The size of char data type : 1
The size of float data type : 4
The size of double data type : 8
Different data types also have different ranges up to which they can store numbers. These
ranges may vary from compiler to compiler. Below is a list of ranges along with the memory
requirement and format specifiers on the 32-bit GCC compiler.
Data Type   Size (bytes)   Range                                 Format Specifier
char        1              -128 to 127                           %c
int         4              -2,147,483,648 to 2,147,483,647       %d
float       4              1.2E-38 to 3.4E+38                    %f
double      8              1.7E-308 to 1.7E+308                  %lf
Note: long, short, signed and unsigned are data type modifiers that can be used with some
primitive data types to change the size or range of the data type.
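For illustration, a few declarations using these modifiers might look like the sketch below; the sizes noted in the comments are typical but compiler-dependent.

#include <stdio.h>
int main() {
    short int a = 1;        // typically 2 bytes
    unsigned int b = 10;    // stores only non-negative values
    long long int c = 123;  // at least 8 bytes
    printf("%zu %zu %zu", sizeof(a), sizeof(b), sizeof(c));
    return 0;
}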
Abstract Data Type (ADT)
An Abstract Data Type (ADT) is a conceptual model that defines a set of operations and behaviors
for a data structure, without specifying how these operations are implemented or how data is
organized in memory. The definition of ADT only mentions what operations are to be performed
but not how these operations will be implemented. It does not specify how data will be organized
in memory and what algorithms will be used for implementing the operations. It is called
“abstract” because it provides an implementation-independent view.
The process of providing only the essentials and hiding the details is known as abstraction.
For example, we use primitive values like int, float, and char with the understanding that these
data types can be operated on without any knowledge of their implementation details. ADTs
operate similarly by defining what operations are possible without detailing their implementation.
Difference Between ADTs and UDTs
• Implementation Details: An ADT does not specify how operations are implemented or how data is structured; a UDT specifies how to create and organize data to implement the structure.
• Examples: ADTs - List ADT, Stack ADT, Queue ADT; UDTs - structures, classes, enumerations, records.
Examples of ADTs
Now, let's understand three common ADTs: List ADT, Stack ADT, and Queue ADT.
1. List ADT
The List ADT (Abstract Data Type) is a sequential collection of elements that supports a set of
operations without specifying the internal implementation. It provides an ordered way to store,
access, and modify data.
View of List
The List ADT needs to store the required data in a sequence and should support the following
operations (a minimal array-backed sketch in C follows the list):
• get(): Return an element from the list at any given position.
• insert(): Insert an element at any position in the list.
• remove(): Remove the first occurrence of any element from a non-empty list.
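Below is a minimal sketch of how a List ADT might be realized in C with an array-backed implementation. The type name List, the fixed LIST_CAPACITY, and the function names list_get, list_insert, and list_remove are assumptions made for illustration, not a standard API; the key point is that callers only rely on the operations, not on the internal array.

#include <stdio.h>
#include <stdbool.h>

#define LIST_CAPACITY 100   // assumed fixed capacity for this sketch

typedef struct {
    int data[LIST_CAPACITY];
    int size;
} List;

// get(): return the element at position pos (assumes 0 <= pos < size).
int list_get(const List *l, int pos) { return l->data[pos]; }

// insert(): insert x at position pos, shifting later elements right.
bool list_insert(List *l, int pos, int x) {
    if (l->size == LIST_CAPACITY || pos < 0 || pos > l->size)
        return false;
    for (int i = l->size; i > pos; i--)
        l->data[i] = l->data[i - 1];
    l->data[pos] = x;
    l->size++;
    return true;
}

// remove(): remove the first occurrence of x from the list.
bool list_remove(List *l, int x) {
    for (int i = 0; i < l->size; i++) {
        if (l->data[i] == x) {
            for (int j = i; j < l->size - 1; j++)
                l->data[j] = l->data[j + 1];
            l->size--;
            return true;
        }
    }
    return false;
}

int main() {
    List l = { .size = 0 };
    list_insert(&l, 0, 10);
    list_insert(&l, 1, 30);
    list_insert(&l, 1, 20);                               // list is now 10 20 30
    list_remove(&l, 20);                                  // list is now 10 30
    printf("%d %d", list_get(&l, 0), list_get(&l, 1));    // prints: 10 30
    return 0;
}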
2. Stack ADT
View of Stack
In the Stack ADT, the order of insertion and deletion follows the FILO or LIFO (Last In, First Out)
principle. Elements are inserted and removed from the same end, called the top of the stack. It
should also support the following operations:
• push(): Insert an element at one end of the stack called the top.
• pop(): Remove and return the element at the top of the stack, if it is not empty.
• peek(): Return the element at the top of the stack without removing it, if the stack is not
empty.
• size(): Return the number of elements in the stack.
• isEmpty(): Return true if the stack is empty; otherwise, return false.
• isFull(): Return true if the stack is full; otherwise, return false.
3. Queue ADT
The Queue ADT is a linear data structure that follows the FIFO (First In, First Out) principle. It
allows elements to be inserted at one end (rear) and removed from the other end (front).
View of Queue
The Queue ADT follows a design similar to the Stack ADT, but the order of insertion and deletion
changes to FIFO. Elements are inserted at one end (called the rear) and removed from the other
end (called the front). It should support the following operations:
• enqueue(): Insert an element at the end of the queue.
• dequeue(): Remove and return the first element of the queue, if the queue is not empty.
• peek(): Return the element at the front of the queue without removing it, if the queue is not empty.
• size(): Return the number of elements in the queue.
Features of ADT
• Modularity: ADTs can be combined with other ADTs to form larger, more complex data
structures. This allows for greater flexibility and modularity in programming.
Overall, ADTs provide a powerful tool for organizing and manipulating data in a structured and
efficient manner.
Abstract data types (ADTs) have several advantages and disadvantages that should be considered
when deciding to use them in software development. Here are some of the main advantages and
disadvantages of using ADTs:
Advantages and Disadvantages of ADT
Advantage
• Encapsulation: ADTs provide a way to encapsulate data and operations into a single unit,
making it easier to manage and modify the data structure.
• Abstraction: ADTs allow users to work with data structures without having to know the
implementation details, which can simplify programming and reduce errors.
• Data Structure Independence: ADTs can be implemented using different data structures,
which can make it easier to adapt to changing needs and requirements.
• Information Hiding: ADTs can protect the integrity of data by controlling access and
preventing unauthorized modifications.
• Modularity: ADTs can be combined with other ADTs to form more complex data
structures, which can increase flexibility and modularity in programming.
Disadvantages
• Overhead: Implementing ADTs can add overhead in terms of memory and processing,
which can affect performance.
• Complexity: ADTs can be complex to implement, especially for large and complex data
structures.
• Learning Curve: Using ADTs requires knowledge of their implementation and usage,
which can take time and effort to learn.
• Limited Flexibility: Some ADTs may be limited in their functionality or may not be suitable
for all types of data structures.
• Cost: Implementing ADTs may require additional resources and investment, which can
increase the cost of development.
Abstract Data Types (ADTs)- FAQs
What is an Abstract Data Type (ADT)?
An ADT is a model that defines operations and behaviors for a data type without specifying how
these are implemented or how data is stored.
How does an ADT differ from a User-Defined Data Type (UDT)?
ADTs focus on what operations can be performed and their behaviors. UDTs are custom types
created by programmers that can implement ADTs.
Can you give examples of ADTs?
Examples include List ADT, Stack ADT, and Queue ADT.
What are the main features of ADTs?
Key features are abstraction, encapsulation, modularity, and information hiding.
C Structures
In C, a structure is a user-defined data type that can be used to group items of possibly different
types into a single type. The struct keyword is used to define a structure. The items in the structure
are called its members, and they can be of any valid data type.
Example:
#include <stdio.h>
// Defining a structure
struct A {
int x;
};
int main() {
    // Creating a structure variable
    struct A a;

    // Initializing member
    a.x = 11;

    printf("%d", a.x);
    return 0;
}
Output
11
Explanation: In this example, a structure A is defined to hold an integer member x. A variable a of
type struct A is created and its member x is initialized to 11 by accessing it using dot operator.
The value of a.x is then printed to the console.
Structures are used when you want to store a collection of different data types, such as integers,
floats, or even other structures under a single name. To understand how structures are
foundational to building complex data structures, the C Programming Course Online with Data
Structures provides practical applications and detailed explanations.
Syntax of Structure
There are two steps of creating a structure in C:
1. Structure Definition
2. Creating Structure Variables (a brief sketch of this step is shown after the syntax below)
Structure Definition
A structure is defined using the struct keyword followed by the structure name and its members.
It is also called a structure template or structure prototype, and no memory is allocated to the
structure in the declaration.
struct structure_name {
data_type1 member1;
data_type2 member2;
…
};
• structure_name: Name of the structure.
• data_type1 member1, data_type2 member2, ...: The members of the structure, each with its own data type.
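The second step listed earlier, creating structure variables, is sketched below using an assumed struct Point chosen only for illustration; both common ways (along with the definition and separately inside main) are shown.

#include <stdio.h>

// Way 1: declare a variable along with the structure definition
struct Point {
    int x;
    int y;
} p1;               // p1 is created here

int main() {
    // Way 2: declare a variable separately using the struct keyword
    struct Point p2;

    p2.x = 3;
    p2.y = 4;
    printf("%d %d", p2.x, p2.y);   // prints: 3 4
    return 0;
}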
Note: We cannot initialize arrays or strings using the assignment operator after variable
declaration.
Initialization using Initializer List
struct structure_name str = {value1, value2, value3 ….};
In this type of initialization, the values are assigned in sequential order as they are declared in the
structure template.
Initialization using Designated Initializer List
Designated Initialization allows structure members to be initialized in any order. This feature has
been added in the C99 standard.
struct structure_name str = { .member1 = value1, .member2 = value2, .member3 = value3 };
Example of Structure Initialization
#include <stdio.h>

struct Student {
    char name[50];
    int age;
    float grade;
};

int main() {
    // Initialization using initializer list
    struct Student s1 = {"Rahul", 20, 18.50};

    // Initialization using designated initializer list (any order)
    struct Student s2 = {.age = 18, .name = "Vikas", .grade = 22.00};

    printf("%s %d %.2f\n", s1.name, s1.age, s1.grade);
    printf("%s %d %.2f\n", s2.name, s2.age, s2.grade);
    return 0;
}
Output
Rahul 20 18.50
Vikas 18 22.00
Definition, Types, Complexity and Examples of Algorithm
An algorithm is a well-defined sequential computational technique that accepts a value or a
collection of values as input and produces the output(s) needed to solve a problem.
In other words, an algorithm is said to be correct if and only if it halts with the proper output
for every input instance.
Need of Algorithms:
Algorithms are used to solve problems or automate tasks in a systematic and efficient manner.
They are a set of instructions or rules that guide the computer or software in performing a
particular task or solving a problem.
There are several reasons why we use algorithms:
• Efficiency: Algorithms can perform tasks quickly and accurately, making them an essential
tool for tasks that require a lot of calculations or data processing.
• Consistency: Algorithms are repeatable and produce consistent results every time they are
executed. This is important when dealing with large amounts of data or complex
processes.
• Consider a clock. We know the clock is ticking, but how does the manufacturer set up those
nuts and bolts so that every 60 seconds the minute hand moves, and every 60 minutes the
hour hand moves? To solve this problem, there must be an algorithm behind it.
• Seen someone cooking your favorite food for you? Is the recipe necessary for it? Yes, it is
necessary, as a recipe is a sequential procedure that turns a raw potato into a chilli potato.
This is what an algorithm is: following a procedure to get the desired output. Is the
sequence necessary to be followed? Yes, the sequence is the most important thing that
has to be followed to get what we want.
Types of Algorithms:
• Sorting algorithms: Bubble Sort, Insertion Sort, and many more. These algorithms are
used to sort data in a particular order.
• Searching algorithms: Linear search, binary search, etc. These algorithms are used in
finding a value or record that the user demands.
• Graph Algorithms: It is used to find solutions to problems like finding the shortest path
between cities, and real-life problems like traveling salesman problems.
Sorting algorithms are algorithms that take a collection of elements and rearrange them in a
specified order (e.g. ascending or descending). There are many different sorting algorithms,
each with its own strengths and weaknesses. Some of the most commonly used sorting
algorithms include:
Bubble sort: A simple sorting algorithm that repeatedly steps through the list, compares adjacent
elements and swaps them if they are in the wrong order.
Insertion sort: A simple sorting algorithm that builds up the final sorted array one item at a time,
by comparing each new item to the items that have already been sorted and inserting it in the
correct position.
Selection sort: A simple sorting algorithm that repeatedly selects the minimum element from the
unsorted part of the array and moves it to the end of the sorted part.
Merge sort: A divide-and-conquer sorting algorithm that works by dividing the unsorted list into
n sub-lists, sorting each sub-list, and then merging them back into a single sorted list.
Quick sort: A divide-and-conquer sorting algorithm that works by selecting a “pivot” element
from the array and partitioning the other elements into two sub-arrays, according to whether
they are less than or greater than the pivot. The sub-arrays are then sorted recursively.
Each of these algorithms has different time and space complexities, making some more suitable
for certain use cases than others.
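To make one of the sorting algorithms above concrete, here is a minimal bubble sort sketch in C; the function name bubble_sort and the sample array are chosen only for illustration.

#include <stdio.h>

// Repeatedly steps through the array, swapping adjacent
// elements that are in the wrong order.
void bubble_sort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}

int main() {
    int arr[] = {5, 1, 4, 2, 8};
    int n = sizeof(arr) / sizeof(arr[0]);
    bubble_sort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);   // prints: 1 2 4 5 8
    return 0;
}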
Searching algorithms are algorithms that search for a particular element or value in a data
structure (such as an array or a linked list). Some of the most commonly used searching
algorithms include:
Linear search: A simple searching algorithm that iterates through every element of a list until it
finds a match.
Binary search: A searching algorithm that works by dividing a sorted list in half repeatedly, until
the desired element is found or it can be determined that the element is not present.
Jump search: A searching algorithm that works by jumping ahead by fixed steps in the list, until
a suitable candidate is found, and then performing a linear search in the surrounding elements.
Interpolation search: A searching algorithm that works by using information about the range of
values in the list to estimate the position of the desired element and then verifying that it is indeed
present.
Hash table search: A searching algorithm that uses a hash function to map elements to indices in
an array, and then performs constant-time lookups in the array to find the desired element.
Each of these algorithms has different time and space complexities, making some more suitable
for certain use cases than others. The choice of which algorithm to use depends on the specific
requirements of the problem, such as the size of the data structure, the distribution of values, and
the desired time complexity.
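As a concrete illustration of binary search on a sorted array, here is a minimal sketch in C; the function name binary_search and the sample data are illustrative assumptions.

#include <stdio.h>

// Returns the index of key in the sorted array arr, or -1 if absent.
int binary_search(const int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // middle index, avoids overflow
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;    // search the right half
        else
            high = mid - 1;   // search the left half
    }
    return -1;
}

int main() {
    int arr[] = {2, 3, 5, 7, 11, 13};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d", binary_search(arr, n, 7));  // prints: 3
    return 0;
}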
Graph algorithms are a set of algorithms that are used to process, analyze and understand
graph data structures. Graphs are mathematical structures used to model relationships
between objects, where the objects are represented as vertices (or nodes) and the relationships
between them are represented as edges. Graph algorithms are used in a variety of applications
such as network analysis, social network analysis, recommendation systems, and in many other
areas where understanding the relationships between objects is important. Some of the
common graph algorithms include (a small BFS sketch in C follows this list):
Shortest Path algorithms (e.g. Dijkstra’s, Bellman-Ford, A*)
Minimum Spanning Tree algorithms (e.g. Kruskal, Prim)
Maximum Flow algorithms (e.g. Ford-Fulkerson, Edmonds-Karp)
Network Flow algorithms (e.g. Bipartite Matching)
Connectivity algorithms (e.g. Depth-first Search, Breadth-first Search)
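To make the connectivity algorithms above concrete, here is a minimal breadth-first search sketch in C over a small adjacency-matrix graph; the graph, the constant V, and the function name bfs are assumptions made only for illustration.

#include <stdio.h>

#define V 5   // number of vertices in this example graph

// Breadth-first search from vertex start over an adjacency matrix.
void bfs(int adj[V][V], int start) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0;

    visited[start] = 1;
    queue[rear++] = start;

    while (front < rear) {
        int u = queue[front++];
        printf("%d ", u);
        for (int v = 0; v < V; v++) {
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;
                queue[rear++] = v;
            }
        }
    }
}

int main() {
    int adj[V][V] = {
        {0, 1, 1, 0, 0},
        {1, 0, 0, 1, 0},
        {1, 0, 0, 1, 0},
        {0, 1, 1, 0, 1},
        {0, 0, 0, 1, 0}
    };
    bfs(adj, 0);   // prints: 0 1 2 3 4
    return 0;
}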
Why do we use algorithms?
Consider two kids, Aman and Rohan, solving the Rubik’s Cube. Aman knows how to solve it in a
definite number of steps. On the other hand, Rohan knows that he will do it but is not aware of
the procedure. Aman solves the cube within 2 minutes whereas Rohan is still stuck and by the end
of the day, he somehow managed to solve it (might have cheated as the procedure is necessary).
So solving with a procedure/algorithm is much more effective than solving without one. Hence,
the need for an algorithm is a must.
In terms of designing a solution to an IT problem, computers are fast but not infinitely fast, and
memory may be inexpensive but it is not free. Computing time is therefore a bounded resource,
and so is space in memory. We should use these resources wisely, and algorithms that are
efficient in terms of time and space will help us do so.
Creating an Algorithm:
Since the algorithm is language-independent, we write the steps to demonstrate the logic behind
the solution to be used for solving a problem. But before writing an algorithm, keep the following
points in mind:
• The algorithm should be clear and unambiguous.
• There should be 0 or more well-defined inputs in an algorithm.
• An algorithm must produce one or more well-defined outputs that are equivalent to the
desired output.
• Algorithms must stop or end after a finite number of steps.
• In an algorithm, step-by-step instructions should be supplied, and they should be
independent of any computer code.
Example: algorithm to multiply 2 numbers and print the result (a C sketch follows the steps):
Step 1: Start
Step 2: Get the knowledge of input. Here we need 3 variables; a and b will be the user input and
c will hold the result.
Step 3: Declare a, b, c variables.
Step 4: Take input for a and b variable from the user.
Step 5: Know the problem and find the solution using operators, data structures and logic
We need to multiply a and b variables so we use * operator and assign the result to c.
That is c <- a * b
Step 6: Check how to give output, Here we need to print the output. So write print c
Step 7: End
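A direct C translation of these steps might look like the sketch below (the variable names a, b and c follow the algorithm above):

#include <stdio.h>

int main() {
    // Step 3: Declare a, b, c variables
    int a, b, c;

    // Step 4: Take input for a and b from the user
    scanf("%d %d", &a, &b);

    // Step 5: c <- a * b
    c = a * b;

    // Step 6: Print the output
    printf("%d", c);

    // Step 7: End
    return 0;
}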
Example 1: Write an algorithm to find the maximum of all the elements present in the array.
Follow the algorithm approach as below (a C sketch is given after the steps):
Step 1: Start the Program
Step 2: Declare a variable max with the value of the first element of the array.
Step 3: Compare max with other elements using loop.
Step 4: If max < array element value, change max to new max.
Step 5: If no element is left, return or print max otherwise goto step 3.
Step 6: End of Solution
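A possible C implementation of this algorithm is sketched below; the sample array and the function name find_max are illustrative only.

#include <stdio.h>

// Returns the maximum element of arr (assumes n >= 1).
int find_max(const int arr[], int n) {
    // Step 2: max starts as the first element
    int max = arr[0];

    // Steps 3-5: compare max with the remaining elements
    for (int i = 1; i < n; i++) {
        if (max < arr[i])
            max = arr[i];
    }
    return max;
}

int main() {
    int arr[] = {3, 9, 2, 7};
    printf("%d", find_max(arr, 4));  // prints: 9
    return 0;
}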
Example 2: Write an algorithm to find the average of 3 subjects.
Follow the algorithm approach as below (a C sketch is given after the steps):
Step 1: Start the Program
Step 2: Declare and Read 3 Subject, let’s say S1, S2, S3
Step 3: Calculate the sum of all the 3 Subject values and store result in Sum variable (Sum =
S1+S2+S3)
Step 4: Divide Sum by 3 and assign it to Average variable. (Average = Sum/3)
Step 5: Print the value of Average of 3 Subjects
Step 6: End of Solution
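The same steps can be sketched in C as follows (S1, S2, S3 as in the algorithm; float is assumed so the division is not truncated):

#include <stdio.h>

int main() {
    float S1, S2, S3;

    // Step 2: Read the 3 subject marks
    scanf("%f %f %f", &S1, &S2, &S3);

    // Step 3: Sum of all 3 subjects
    float Sum = S1 + S2 + S3;

    // Step 4: Average = Sum / 3
    float Average = Sum / 3;

    // Step 5: Print the average
    printf("%f", Average);
    return 0;
}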
Know about Algorithm Complexity:
Complexity in algorithms refers to the amount of resources (such as time or memory) required to
solve a problem or perform a task. The most common measure of complexity is time complexity,
which refers to the amount of time an algorithm takes to produce a result as a function of the size
of the input. Memory complexity refers to the amount of memory used by an algorithm. Algorithm
designers strive to develop algorithms with the lowest possible time and memory complexities,
since this makes them more efficient and scalable.
The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of
the amount of data the algorithm must process.
Usually there are natural units for the domain and range of this function.
An algorithm is analyzed using Time Complexity and Space Complexity. Writing an efficient
algorithm helps consume the minimum amount of time for processing the logic. An algorithm
A is judged on the basis of two parameters for an input of size n:
• Time Complexity: Time taken by the algorithm to solve the problem. It is measured by
calculating the iteration of loops, number of comparisons etc.
• Time complexity is a function describing the amount of time an algorithm takes in terms
of the amount of input to the algorithm.
• “Time” can mean the number of memory accesses performed, the number of comparisons
between integers, the number of times some inner loop is executed, or some other natural
unit related to the amount of real time the algorithm will take.
• Space Complexity: Space taken by the algorithm to solve the problem. It includes space
used by necessary input variables and any extra space (excluding the space taken by
inputs) that is used by the algorithm. For example, if we use a hash table (a kind of data
structure), we need an array to store values, so this is extra space occupied and hence
will count towards the space complexity of the algorithm. This extra space is known as
Auxiliary Space.
• Space complexity is a function describing the amount of memory(space)an algorithm
takes in terms of the amount of input to the algorithm.
• Space complexity is sometimes ignored because the space used is minimal and/or obvious,
but sometimes it becomes as much of an issue as time.
The time complexity of the operations:
• The choice of data structure should be based on the time complexity of the operations that
will be performed.
• Time complexity is defined in terms of how long it takes to run a given algorithm,
based on the length of the input.
• The time complexity of an algorithm is the amount of time it takes for each statement to
complete. It is highly dependent on the size of the processed data.
• For example, if you need to perform searches frequently, you should use a binary search
tree.
1. Best case complexity: The best-case scenario for an algorithm is the scenario in which the
algorithm performs the minimum amount of work (e.g. takes the shortest amount of time, uses
the least amount of memory, etc.).
2. Worst case complexity: The worst-case scenario for an algorithm is the scenario in which the
algorithm performs the maximum amount of work (e.g. takes the longest amount of time, uses
the most amount of memory, etc.).
In analyzing the complexity of an algorithm, it is often more informative to study the worst-case
scenario, as this gives a guaranteed upper bound on the performance of the algorithm. Best-case
scenario analysis is sometimes performed, but is generally less important as it provides a lower
bound that is often trivial to achieve.
Advantages of Algorithms
• Easy to understand: Since it is a stepwise representation of a solution to a given problem,
it is easy to understand.
• Debug / Error Finding: Every step is independent / in a flow so it will be easy to spot and
fix the error.
• Sub-Problems: It is written in a flow so now the programmer can divide the tasks which
makes them easier to code.
Disadvantages of Algorithms
• Creating efficient algorithms is time-consuming and requires good logical skills.
• It is difficult to show branching and looping in algorithms.
Omega (Ω) Notation:
It is defined as the lower bound, and a lower bound on an algorithm is the least amount of time
required (the most efficient way possible, in other words, the best case).
Just like O notation provides an asymptotic upper bound, Ω notation provides an asymptotic
lower bound.
The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that
doesn't depend on machine-specific constants, mainly because this analysis doesn't require
algorithms to be implemented and the time taken by programs to be compared. We have already
discussed three main asymptotic notations. The following two more asymptotic notations are
used to represent the time complexity of algorithms.
Little o Asymptotic Notation:
Big O is used as a tight upper bound on the growth of an algorithm’s effort (this effort is described
by the function f(n)), even though, as written, it can also be a loose upper bound. “Little o” (o)
notation is used to describe an upper bound that cannot be tight.
In the domain of algorithm analysis, the little o notation is a valuable tool used to describe the
behavior of functions as they approach certain limits. When we say that a function f(n) is o(g(n)),
we are essentially stating that f(n) grows slower than g(n) as n approaches infinity. In simpler
terms, if f(n) = o(g(n)), it means that g(n) grows faster than f(n).
Thus, Little o means loose upper-bound of f(n). Little o is a rough estimate of the maximum order
of growth whereas Big-O may be the actual order of growth.
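As an illustrative check before the formal relation below (with f(n) = 7n + 8 and g(n) = n² chosen only as an example): lim (n→∞) (7n + 8)/n² = 0, so 7n + 8 ∈ o(n²); but 7n + 8 ∉ o(n), since lim (n→∞) (7n + 8)/n = 7 ≠ 0.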
Mathematical Relation:
f(n) ∈ o(g(n)) if and only if lim (n→∞) f(n)/g(n) = 0

Little ω Asymptotic Notation:
Just as Ω gives an asymptotic lower bound, little omega (ω) gives a loose lower bound: f(n) ∈ ω(g(n)) means that f(n) grows strictly faster than g(n) as n approaches infinity.

Mathematical Relation:
if f(n) ∈ ω(g(n)) then, lim (n→∞) f(n)/g(n) = ∞

Example:
Prove that 4n + 6 ∈ ω(1);
The little omega (ω) running time can be proven by applying the limit formula:
lim (n→∞) (4n + 6)/1 = ∞, so 4n + 6 ∈ ω(1).
Stack Data Structure
• A shuttlecock box (or any other box that is closed at one end) is a great real-world
example of the LIFO (Last In, First Out) principle, where insertions and removals happen
from the same end.
Representation of Stack Data Structure:
Stack follows LIFO (Last In First Out) Principle so the element which is pushed last is popped first.
Types of Stack:
• Fixed Size Stack : As the name suggests, a fixed size stack has a fixed size and cannot grow
or shrink dynamically. If the stack is full and an attempt is made to add an element to it,
an overflow error occurs. If the stack is empty and an attempt is made to remove an
element from it, an underflow error occurs.
• Dynamic Size Stack : A dynamic size stack can grow or shrink dynamically. When the stack
is full, it automatically increases its size to accommodate the new element, and when the
stack is empty, it decreases its size. This type of stack is implemented using a linked list,
as it allows for easy resizing of the stack.
Basic Operations on Stack:
In order to make manipulations in a stack, there are certain operations provided to us.
• push() to insert an element into the stack
• pop() to remove an element from the stack
Push Operation on Stack:
Adds an item to the stack. If the stack is full, then it is said to be an Overflow condition.
Algorithm for Push Operation (a C sketch is given after the steps):
• Before pushing the element to the stack, we check if the stack is full .
• If the stack is full (top == capacity-1) , then Stack Overflows and we cannot insert the
element to the stack.
• Otherwise, we increment the value of top by 1 (top = top + 1) and the new value is
inserted at top position .
• The elements can be pushed into the stack till we reach the capacity of the stack.
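Following these steps, a minimal array-based sketch of the push operation (with a matching pop for completeness) is shown below; the capacity value, the struct name Stack, and the field names are assumptions made for illustration, not a fixed API.

#include <stdio.h>

#define CAPACITY 100   // assumed fixed capacity

typedef struct {
    int items[CAPACITY];
    int top;           // index of the top element, -1 when empty
} Stack;

// Push: check for overflow, then increment top and store the value.
void push(Stack *s, int x) {
    if (s->top == CAPACITY - 1) {
        printf("Stack Overflow\n");
        return;
    }
    s->items[++s->top] = x;
}

// Pop: check for underflow, then return the top value and decrement top.
int pop(Stack *s) {
    if (s->top == -1) {
        printf("Stack Underflow\n");
        return -1;   // sentinel value for this sketch
    }
    return s->items[s->top--];
}

int main() {
    Stack s = { .top = -1 };
    push(&s, 10);
    push(&s, 20);
    printf("%d\n", pop(&s));   // prints: 20
    return 0;
}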
Implementation of Stack
The basic operations that can be performed on a stack include push, pop, and peek. There are
multiple ways to implement a stack –
• Implementation of Stack using Array
• Implementation of Stack using Linked List
• Implementation of Stack using Deque
Complexity Analysis of Operations on Stack Data Structure:
push(), pop(), peek() and isEmpty() each run in O(1) time with O(1) auxiliary space.
Operations on Queue
1. Enqueue:
Enqueue operation adds (or stores) an element to the end of the queue.
Steps:
1. Check if the queue is full. If so, return an overflow error and exit.
2. If the queue is not full, increment the rear pointer to the next available position.
3. Insert the element at the rear.
2. Dequeue:
Dequeue operation removes the element at the front of the queue. The following steps are taken
to perform the dequeue operation (a C sketch of both operations follows):
1. Check if the queue is empty. If so, return an underflow error and exit.
2. If the queue is not empty, take the element at the front.
3. Increment the front pointer to the next position and return the removed element.
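Based on the enqueue and dequeue steps above, here is a minimal array-based sketch in C; the capacity value, the struct name Queue, and the field names are illustrative assumptions only.

#include <stdio.h>

#define CAPACITY 100   // assumed fixed capacity

typedef struct {
    int items[CAPACITY];
    int front;   // index of the first element
    int rear;    // index of the last element, -1 when empty
} Queue;

// Enqueue: check for overflow, advance rear, store the element.
void enqueue(Queue *q, int x) {
    if (q->rear == CAPACITY - 1) {
        printf("Queue Overflow\n");
        return;
    }
    q->items[++q->rear] = x;
}

// Dequeue: check for underflow, return the front element, advance front.
int dequeue(Queue *q) {
    if (q->front > q->rear) {
        printf("Queue Underflow\n");
        return -1;   // sentinel value for this sketch
    }
    return q->items[q->front++];
}

int main() {
    Queue q = { .front = 0, .rear = -1 };
    enqueue(&q, 1);
    enqueue(&q, 2);
    printf("%d\n", dequeue(&q));   // prints: 1
    return 0;
}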
Types of Queues
The major types of queues are: Simple Queue, Circular Queue, Priority Queue, and Double-Ended Queue (Deque).
4. What are some real-life applications of Queue data structure?
Real-life applications of queue include: cashier lines in stores, CPU scheduling, disk scheduling,
and serving requests on a shared resource like a printer.
5. What are some limitations of Queue data structure?
Some of the limitations of Queue are: deletion of a middle element is not possible until all the
elements before it are deleted. Also, random access of any element takes linear time
complexity.
6. What are the different ways to implement a Queue?
Queue data structure can be implemented by using Arrays or by using Linked List. The Array
implementation requires the size of the Queue to be specified at the time of declaration.
7. What are some common operations in Queue data structure?
Some common operations on Queue are: Insertion or enqueue, Deletion or dequeue, front, rear,
isFull, isEmpty, etc.