
Class Notes

DSA
Data Types in C
Each variable in C has an associated data type. It specifies the type of data that the variable can
store like integer, character, floating, double, etc. Each data type requires different amounts
of memory and has some specific operations which can be performed over it.
The data types in C can be classified as follows:

Types                   | Description                                                                                                   | Data Types
Primitive Data Types    | The most basic data types, used for representing simple values such as integers, characters, and floating-point numbers. | int, char, float, double, void
Derived Types           | Data types that are derived from the primitive or built-in data types.                                        | array, pointer, function
User Defined Data Types | Data types that are defined by the user himself.                                                              | structure, union, enum

Understanding C’s data types is critical for writing efficient programs.


The following are some main primitive data types in C:
Integer Data Type
The integer data type in C is used to store integer values (any whole number: positive, negative, or zero, without a decimal part). Octal, hexadecimal, and decimal values can be stored in the int data type in C.
• Range: -2,147,483,648 to 2,147,483,647

• Size: 4 bytes
• Format Specifier: %d
Syntax of Integer
We use int keyword to declare the integer variable:
int var_name;

The integer data type can also be used with modifiers:

1. unsigned int: Stores values from zero upwards only; unlike signed int, it cannot store negative values.
2. short int: Typically 2 bytes in size (smaller than int), so it can only store values from -32,768 to 32,767.
3. long int: A larger version of the int data type, so it can store values greater than int.
4. unsigned short int: Related to short int in the same way unsigned int is related to int; it stores only non-negative values.

Note: The size of an integer data type is compiler-dependent. We can use the sizeof operator to check the actual size of any data type.

Example of int
// C program to print integer data types.
#include <stdio.h>

int main()
{
    // Integer value with positive data.
    int a = 9;

    // Integer value with negative data.
    int b = -9;

    // U or u is used for unsigned int in C.
    unsigned int c = 89U;

    // L or l is used for long int in C.
    long int d = 99998L;

    printf("Integer value with positive data: %d\n", a);
    printf("Integer value with negative data: %d\n", b);
    printf("Integer value with an unsigned int data: %u\n", c);
    printf("Integer value with a long int data: %ld", d);

    return 0;
}

Output
Integer value with positive data: 9
Integer value with negative data: -9
Integer value with an unsigned int data: 89
Integer value with a long int data: 99998
Character Data Type
Character data type allows its variable to store only a single character. The size of the character
is 1 byte. It is the most basic data type in C. It stores a single character and requires a single
byte of memory in almost all compilers.
• Range: (-128 to 127) or (0 to 255)

• Size: 1 byte
• Format Specifier: %c
Syntax of char
The char keyword is used to declare the variable of character type:
char var_name;

Example of char
// C program to print the character data type.
#include <stdio.h>

int main()
{
    char a = 'a';
    char c;

    printf("Value of a: %c\n", a);

    a++;
    printf("Value of a after increment is: %c\n", a);

    // c is assigned the ASCII value 99, which
    // corresponds to the character 'c'
    // (a --> 97, b --> 98, c --> 99).
    c = 99;

    printf("Value of c: %c", c);

    return 0;
}

Output
Value of a: a
Value of a after increment is: b

Value of c: c
Float Data Type
In C programming, the float data type is used to store floating-point values, i.e. decimal and exponential numbers, with single precision.
• Range: 1.2E-38 to 3.4E+38
• Size: 4 bytes
• Format Specifier: %f

Syntax of float
The float keyword is used to declare the variable as a floating point:
float var_name;
Example of Float
// C Program to demonstrate use
// of floating types
#include <stdio.h>

int main()
{
    float a = 9.0f;
    float b = 2.5f;

    // 2 x 10^-4
    float c = 2E-4f;

    printf("%f\n", a);
    printf("%f\n", b);
    printf("%f", c);

    return 0;
}

Output
9.000000

2.500000
0.000200
Double Data Type
A double data type in C is used to store decimal numbers (numbers with floating point values) with double precision. It is used to define numeric values which hold numbers with decimal values in C.
The double data type is a 64-bit floating-point type. Since double has more precision than float, it occupies twice the memory of the float type (8 bytes instead of 4) and can represent roughly 15 to 17 significant decimal digits.
• Range: 1.7E-308 to 1.7E+308
• Size: 8 bytes
• Format Specifier: %lf

Syntax of Double
The variable can be declared as double precision floating point using the double keyword:
double var_name;
Example of Double
// C Program to demonstrate
// use of double data type
#include <stdio.h>

int main()
{
double a = 123123123.00;
double b = 12.293123;
double c = 2312312312.123123;

printf("%lf\n", a);

printf("%lf\n", b);

printf("%lf", c);

return 0;
}

Output
123123123.000000

12.293123
2312312312.123123
Void Data Type
The void data type in C is used to specify that no value is present. It does not provide a result
value to its caller. It has no values and no operations. It is used to represent nothing. Void is
used in multiple ways as function return type, function arguments as void, and pointers to void.
Syntax:
// function return type void
void exit(int check);
// Function without any parameter can accept void.
int print(void);
// memory allocation function which
// returns a pointer to void.
void *malloc (size_t size);
Example of Void
// C program to demonstrate
// use of void pointers

#include <stdio.h>

int main()
{
int val = 30;

void* ptr = &val;


printf("%d", *(int*)ptr);
return 0;
}

Output
30

Size of Data Types in C


The size of the data types in C depends on the compiler and the target architecture, so we cannot define a universal size for them. For that, the C language provides the sizeof operator to check the size of any data type.
Example
// C Program to print size of
// different data types in C
#include <stdio.h>

int main()
{
    int size_of_int = sizeof(int);
    int size_of_char = sizeof(char);
    int size_of_float = sizeof(float);
    int size_of_double = sizeof(double);

    printf("The size of int data type : %d\n", size_of_int);
    printf("The size of char data type : %d\n", size_of_char);
    printf("The size of float data type : %d\n", size_of_float);
    printf("The size of double data type : %d", size_of_double);

    return 0;
}
Output
The size of int data type : 4
The size of char data type : 1
The size of float data type : 4
The size of double data type : 8

Different data types also have different ranges up to which they can store numbers. These
ranges may vary from compiler to compiler. Below is a list of ranges along with the memory
requirement and format specifiers on the 32-bit GCC compiler.

Data Type              | Size (bytes) | Range                                   | Format Specifier
short int              | 2            | -32,768 to 32,767                       | %hd
unsigned short int     | 2            | 0 to 65,535                             | %hu
unsigned int           | 4            | 0 to 4,294,967,295                      | %u
int                    | 4            | -2,147,483,648 to 2,147,483,647         | %d
long int               | 4            | -2,147,483,648 to 2,147,483,647         | %ld
unsigned long int      | 4            | 0 to 4,294,967,295                      | %lu
long long int          | 8            | -(2^63) to (2^63)-1                     | %lld
unsigned long long int | 8            | 0 to 18,446,744,073,709,551,615         | %llu
signed char            | 1            | -128 to 127                             | %c
unsigned char          | 1            | 0 to 255                                | %c
float                  | 4            | 1.2E-38 to 3.4E+38                      | %f
double                 | 8            | 1.7E-308 to 1.7E+308                    | %lf
long double            | 16           | 3.4E-4932 to 1.1E+4932                  | %Lf

Note: long, short, signed and unsigned are data type modifiers that can be used with some primitive data types to change the size or range of the data type.

Abstract Data Types

An Abstract Data Type (ADT) is a conceptual model that defines a set of operations and behaviors
for a data structure, without specifying how these operations are implemented or how data is
organized in memory. The definition of ADT only mentions what operations are to be performed
but not how these operations will be implemented. It does not specify how data will be organized
in memory and what algorithms will be used for implementing the operations. It is called
“abstract” because it provides an implementation-independent view.
The process of providing only the essentials and hiding the details is known as abstraction.

For example, we use primitive values like int, float, and char with the understanding that we can operate on these data types without any knowledge of their implementation details. ADTs work similarly, defining what operations are possible without detailing their implementation.
Difference Between ADTs and UDTs

Aspect                 | Abstract Data Types (ADTs)                                                                                              | User-Defined Data Types (UDTs)
Definition             | Defines a class of objects and the operations that can be performed on them, along with their expected behavior (semantics), but without specifying implementation details. | A custom data type created by combining or extending existing primitive types, specifying both structure and operations.
Focus                  | What operations are allowed and how they behave, without dictating how they are implemented.                           | How data is organized in memory and how operations are executed.
Purpose                | Provides an abstract model to define data structures in a conceptual way.                                              | Allows programmers to create concrete implementations of data structures using primitive types.
Implementation Details | Does not specify how operations are implemented or how data is structured.                                             | Specifies how to create and organize data types to implement the structure.
Usage                  | Used to design and conceptualize data structures.                                                                      | Used to implement data structures that realize the abstract concepts defined by ADTs.
Example                | List ADT, Stack ADT, Queue ADT.                                                                                         | Structures, classes, enumerations, records.

Examples of ADTs
Now, let's understand three common ADTs: List ADT, Stack ADT, and Queue ADT.
1. List ADT

The List ADT (Abstract Data Type) is a sequential collection of elements that supports a set of
operations without specifying the internal implementation. It provides an ordered way to store,
access, and modify data.

View of List
The List ADT needs to store the required data in sequence and should have the following operations (a minimal C sketch follows the list):
• get(): Return an element from the list at any given position.
• insert(): Insert an element at any position in the list.
• remove(): Remove the first occurrence of any element from a non-empty list.

• removeAt(): Remove the element at a specified location from a non-empty list.


• replace(): Replace an element at any position with another element.
• size(): Return the number of elements in the list.
• isEmpty(): Return true if the list is empty; otherwise, return false.

• isFull(): Return true if the list is full; otherwise, return false.

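As a concrete illustration, here is one possible way to express part of the List ADT in C. The ADT itself does not prescribe any representation; the array-backed struct and the names ArrayList, list_insert, list_removeAt, etc. are only illustrative assumptions used to make the sketch self-contained.

// A minimal, illustrative List ADT sketch backed by a fixed-size array.
#include <stdio.h>
#include <stdbool.h>

#define CAPACITY 100

typedef struct {
    int data[CAPACITY];
    int count;
} ArrayList;

void list_init(ArrayList *l) { l->count = 0; }
bool list_isEmpty(const ArrayList *l) { return l->count == 0; }
bool list_isFull(const ArrayList *l) { return l->count == CAPACITY; }
int  list_size(const ArrayList *l) { return l->count; }

// get(): return the element at a given position (0-based).
int list_get(const ArrayList *l, int pos) { return l->data[pos]; }

// insert(): insert an element at any position, shifting later elements right.
bool list_insert(ArrayList *l, int pos, int value) {
    if (list_isFull(l) || pos < 0 || pos > l->count)
        return false;
    for (int i = l->count; i > pos; i--)
        l->data[i] = l->data[i - 1];
    l->data[pos] = value;
    l->count++;
    return true;
}

// removeAt(): remove the element at a given position, shifting later elements left.
bool list_removeAt(ArrayList *l, int pos) {
    if (list_isEmpty(l) || pos < 0 || pos >= l->count)
        return false;
    for (int i = pos; i < l->count - 1; i++)
        l->data[i] = l->data[i + 1];
    l->count--;
    return true;
}

int main() {
    ArrayList l;
    list_init(&l);
    list_insert(&l, 0, 10);
    list_insert(&l, 1, 30);
    list_insert(&l, 1, 20);     // list is now 10, 20, 30
    list_removeAt(&l, 0);       // list is now 20, 30
    printf("size = %d, first = %d\n", list_size(&l), list_get(&l, 0));
    return 0;
}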

2. Stack ADT
The Stack ADT is a linear data structure that follows the LIFO (Last In, First Out) principle. It allows
elements to be added and removed only from one end, called the top of the stack.

View of stack
In Stack ADT, the order of insertion and deletion should be according to the FILO or LIFO Principle.
Elements are inserted and removed from the same end, called the top of the stack. It should also
support the following operations:
• push(): Insert an element at one end of the stack called the top.

• pop(): Remove and return the element at the top of the stack, if it is not empty.
• peek(): Return the element at the top of the stack without removing it, if the stack is not
empty.
• size(): Return the number of elements in the stack.
• isEmpty(): Return true if the stack is empty; otherwise, return false.
• isFull(): Return true if the stack is full; otherwise, return false.
3. Queue ADT

The Queue ADT is a linear data structure that follows the FIFO (First In, First Out) principle. It
allows elements to be inserted at one end (rear) and removed from the other end (front).
View of Queue
The Queue ADT follows a design similar to the Stack ADT, but the order of insertion and deletion
changes to FIFO. Elements are inserted at one end (called the rear) and removed from the other
end (called the front). It should support the following operations:
• enqueue(): Insert an element at the end of the queue.
• dequeue(): Remove and return the first element of the queue, if the queue is not empty.
• peek(): Return the element at the front of the queue without removing it, if the queue is not empty.
• size(): Return the number of elements in the queue.

• isEmpty(): Return true if the queue is empty; otherwise, return false.


Features of ADT
Abstract data types (ADTs) are a way of encapsulating data and operations on that data into a
single unit. Some of the key features of ADTs include:
• Abstraction: The user does not need to know the implementation of the data structure; only the essentials are provided.
• Better Conceptualization: ADT gives us a better conceptualization of the real world.
• Robust: The program is robust and has the ability to catch errors.
• Encapsulation: ADTs hide the internal details of the data and provide a public interface
for users to interact with the data. This allows for easier maintenance and modification
of the data structure.
• Data Abstraction: ADTs provide a level of abstraction from the implementation details of
the data. Users only need to know the operations that can be performed on the data, not
how those operations are implemented.
• Data Structure Independence: ADTs can be implemented using different data structures,
such as arrays or linked lists, without affecting the functionality of the ADT.
• Information Hiding: ADTs can protect the integrity of the data by allowing access only to
authorized users and operations. This helps prevent errors and misuse of the data.

• Modularity: ADTs can be combined with other ADTs to form larger, more complex data
structures. This allows for greater flexibility and modularity in programming.
Overall, ADTs provide a powerful tool for organizing and manipulating data in a structured and
efficient manner.
Abstract data types (ADTs) have several advantages and disadvantages that should be considered
when deciding to use them in software development. Here are some of the main advantages and
disadvantages of using ADTs:
Advantages and Disadvantages of ADT
Advantage
• Encapsulation: ADTs provide a way to encapsulate data and operations into a single unit,
making it easier to manage and modify the data structure.
• Abstraction: ADTs allow users to work with data structures without having to know the
implementation details, which can simplify programming and reduce errors.

• Data Structure Independence: ADTs can be implemented using different data structures,
which can make it easier to adapt to changing needs and requirements.

• Information Hiding: ADTs can protect the integrity of data by controlling access and
preventing unauthorized modifications.

• Modularity: ADTs can be combined with other ADTs to form more complex data
structures, which can increase flexibility and modularity in programming.
Disadvantages
• Overhead: Implementing ADTs can add overhead in terms of memory and processing,
which can affect performance.
• Complexity: ADTs can be complex to implement, especially for large and complex data
structures.
• Learning Curve: Using ADTs requires knowledge of their implementation and usage,
which can take time and effort to learn.
• Limited Flexibility: Some ADTs may be limited in their functionality or may not be suitable
for all types of data structures.
• Cost: Implementing ADTs may require additional resources and investment, which can
increase the cost of development.
Abstract Data Types (ADTs)- FAQs
What is an Abstract Data Type (ADT)?

An ADT is a model that defines operations and behaviors for a data type without specifying how
these are implemented or how data is stored.
How does an ADT differ from a User-Defined Data Type (UDT)?
ADTs focus on what operations can be performed and their behaviors. UDTs are custom types
created by programmers that can implement ADTs.
Can you give examples of ADTs?
Examples include List ADT, Stack ADT, and Queue ADT.
What are the main features of ADTs?
Key features are abstraction, encapsulation, modularity, and information hiding.

C Structures
In C, a structure is a user-defined data type that can be used to group items of possibly different types into a single type. The struct keyword is used to define a structure. The items in the structure are called its members, and they can be of any valid data type.
Example:
#include <stdio.h>

// Defining a structure
struct A {
    int x;
};

int main() {

    // Creating a structure variable
    struct A a;

    // Initializing member
    a.x = 11;

    printf("%d", a.x);
    return 0;
}

Output
11
Explanation: In this example, a structure A is defined to hold an integer member x. A variable a of
type struct A is created and its member x is initialized to 11 by accessing it using dot operator.
The value of a.x is then printed to the console.
Structures are used when you want to store a collection of different data types, such as integers, floats, or even other structures, under a single name.
Syntax of Structure
There are two steps of creating a structure in C:
1. Structure Definition
2. Creating Structure Variables

Structure Definition
A structure is defined using the struct keyword followed by the structure name and its members.
It is also called a structure template or structure prototype, and no memory is allocated to the
structure in the declaration.
struct structure_name {
data_type1 member1;
data_type2 member2;

};
• structure_name: Name of the structure.

• member1, member2, …: Name of the members.


• data_type1, data_type2, …: Type of the members.
Be careful not to forget the semicolon at the end.
Creating Structure Variable
After the structure definition, we have to create a variable of that structure to use it. It is similar to any other variable declaration:
struct structure_name var;

We can also declare structure variables along with the structure definition:

struct structure_name {
    ...
} var1, var2, ...;
Basic Operations of Structure
Following are the basic operations commonly used on structures:
1. Access Structure Members
To access or modify members of a structure, we use the ( . ) dot operator. This is applicable when
we are using structure variables directly.
structure_variable.member1;
structure_variable.member2;
In the case where we have a pointer to the structure, we can also use the arrow operator to
access the members.
structure_ptr -> member1
structure_ptr -> member2
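
For example, the following short snippet shows both the dot and the arrow operators (the struct Point type and variable names here are purely illustrative):

// Accessing members with the dot operator (on a variable)
// and the arrow operator (on a pointer to a structure).
#include <stdio.h>

struct Point {
    int x;
    int y;
};

int main() {
    struct Point p = {3, 4};
    struct Point *ptr = &p;

    printf("%d %d\n", p.x, p.y);        // dot operator on a variable
    printf("%d %d\n", ptr->x, ptr->y);  // arrow operator on a pointer
    return 0;
}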
2. Initialize Structure Members
Structure members cannot be initialized with the declaration. For example, the following C
program fails in the compilation.
struct structure_name {
data_type1 member1 = value1; // COMPILER ERROR: cannot initialize members here
data_type2 member2 = value2; // COMPILER ERROR: cannot initialize members here

};
The reason for the above error is simple. When a structure type is defined, no memory is allocated for it. Memory is allocated only when variables are created, so there is no space to store the assigned value.
We can initialize structure members in 4 ways which are as follows:
Default Initialization
By default, structure members are not automatically initialized to 0 or NULL. Uninitialized
structure members will contain garbage values. However, when a structure variable is declared
with an initializer, all members not explicitly initialized are zero-initialized.
struct structure_name str = {0}; // All members of str are initialized to 0

Initialization using Assignment Operator


struct structure_name str;
str.member1 = value1;
….

Note: We cannot initialize the arrays or strings using assignment operator after variable
declaration.
Initialization using Initializer List
struct structure_name str = {value1, value2, value3 ….};
In this type of initialization, the values are assigned in sequential order as they are declared in the
structure template.
Initialization using Designated Initializer List
Designated Initialization allows structure members to be initialized in any order. This feature has
been added in the C99 standard.
struct structure_name str = { .member1 = value1, .member2 = value2, .member3 = value3 };

The Designated Initialization is only supported in C but not in C++.


#include <stdio.h>

// Defining a structure to represent a student
struct Student {
    char name[50];
    int age;
    float grade;
};

int main() {

    // Declaring and initializing a structure
    // variable
    struct Student s1 = {"Rahul", 20, 18.5};

    // Designated initialization of another structure
    struct Student s2 = {.age = 18, .name = "Vikas", .grade = 22};

    // Accessing structure members
    printf("%s\t%d\t%.2f\n", s1.name, s1.age, s1.grade);
    printf("%s\t%d\t%.2f\n", s2.name, s2.age, s2.grade);

    return 0;
}

Output
Rahul 20 18.50

Vikas 18 22.00
Definition, Types, Complexity and Examples of Algorithm
An algorithm is a well-defined sequential computational technique that accepts a value or a
collection of values as input and produces the output(s) needed to solve a problem.
Or we can say that an algorithm is said to be correct if and only if it stops with the proper output for each input instance.
Need for Algorithms:

Algorithms are used to solve problems or automate tasks in a systematic and efficient manner.
They are a set of instructions or rules that guide the computer or software in performing a
particular task or solving a problem.
There are several reasons why we use algorithms:

• Efficiency: Algorithms can perform tasks quickly and accurately, making them an essential
tool for tasks that require a lot of calculations or data processing.
• Consistency: Algorithms are repeatable and produce consistent results every time they are
executed. This is important when dealing with large amounts of data or complex
processes.

• Scalability: Algorithms can be scaled up to handle large datasets or complex problems,


which makes them useful for applications that require processing large volumes of data.
• Automation: Algorithms can automate repetitive tasks, reducing the need for human
intervention and freeing up time for other tasks.
• Standardization: Algorithms can be standardized and shared among different teams or
organizations, making it easier for people to collaborate and share knowledge.
Overall, algorithms are an essential tool for solving problems in a variety of fields, including
computer science, engineering, data analysis, finance, and many others.
Example:
Consider a box where no one can see what's happening inside; we call it a black box.
We give input to the box and it gives us the output we need, and the procedure behind the conversion of input to the desired output is an algorithm.
An algorithm is independent of the language used. It tells the programmer the logic used to solve
the problem. So, it is a logical step-by-step procedure that acts as a blueprint to programmers.
Real-life examples that define the use of algorithms:

• Consider a clock. We know the clock is ticking, but how does the manufacturer set those nuts and bolts so that the minute hand moves every 60 seconds and the hour hand moves every 60 minutes? To solve this problem, there must be an algorithm behind it.
• Seen someone cooking your favorite food for you? Is the recipe necessary for it? Yes, it is
necessary as a recipe is a sequential procedure that turns a raw potato into a chilly potato.
This is what an algorithm is: following a procedure to get the desired output. Is the
sequence necessary to be followed? Yes, the sequence is the most important thing that
has to be followed to get what we want.
Types of Algorithms:
• Sorting algorithms: Bubble Sort, insertion sort, and many more. These algorithms are
used to sort the data in a particular format.
• Searching algorithms: Linear search, binary search, etc. These algorithms are used in
finding a value or record that the user demands.
• Graph Algorithms: It is used to find solutions to problems like finding the shortest path
between cities, and real-life problems like traveling salesman problems.
Sorting algorithms are algorithms that take a collection of elements and rearrange them in a
specified order (e.g. ascending or descending). There are many different sorting algorithms,
each with its own strengths and weaknesses. Some of the most commonly used sorting
algorithms include:
Bubble sort: A simple sorting algorithm that repeatedly steps through the list, compares adjacent
elements and swaps them if they are in the wrong order.
Insertion sort: A simple sorting algorithm that builds up the final sorted array one item at a time,
by comparing each new item to the items that have already been sorted and inserting it in the
correct position.
Selection sort: A simple sorting algorithm that repeatedly selects the minimum element from the
unsorted part of the array and moves it to the end of the sorted part.
Merge sort: A divide-and-conquer sorting algorithm that works by dividing the unsorted list into
n sub-lists, sorting each sub-list, and then merging them back into a single sorted list.
Quick sort: A divide-and-conquer sorting algorithm that works by selecting a “pivot” element
from the array and partitioning the other elements into two sub-arrays, according to whether
they are less than or greater than the pivot. The sub-arrays are then sorted recursively.
Each of these algorithms has different time and space complexities, making some more suitable
for certain use cases than others.
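
As a small illustration of the first description above, here is a straightforward bubble sort in C; the array contents are arbitrary sample data.

// Bubble sort: repeatedly step through the array, compare adjacent
// elements and swap them if they are in the wrong order.
#include <stdio.h>

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}

int main() {
    int arr[] = {5, 1, 4, 2, 8};
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);   // prints: 1 2 4 5 8
    return 0;
}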
Searching algorithms are algorithms that search for a particular element or value in a data
structure (such as an array or a linked list). Some of the most commonly used searching
algorithms include:
Linear search: A simple searching algorithm that iterates through every element of a list until it
finds a match.
Binary search: A searching algorithm that works by dividing a sorted list in half repeatedly, until
the desired element is found or it can be determined that the element is not present.
Jump search: A searching algorithm that works by jumping ahead by fixed steps in the list, until
a suitable candidate is found, and then performing a linear search in the surrounding elements.
Interpolation search: A searching algorithm that works by using information about the range of
values in the list to estimate the position of the desired element and then verifying that it is indeed
present.
Hash table search: A searching algorithm that uses a hash function to map elements to indices in
an array, and then performs constant-time lookups in the array to find the desired element.
Each of these algorithms has different time and space complexities, making some more suitable
for certain use cases than others. The choice of which algorithm to use depends on the specific
requirements of the problem, such as the size of the data structure, the distribution of values, and
the desired time complexity.
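
For example, a minimal iterative binary search on a sorted array might look like this (the sample array and key are arbitrary):

// Binary search: repeatedly halve a sorted array until the key is
// found or the search range becomes empty. Returns the index or -1.
#include <stdio.h>

int binarySearch(const int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // avoids overflow of (low + high)
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;   // not present
}

int main() {
    int arr[] = {2, 3, 5, 7, 11, 13};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d\n", binarySearch(arr, n, 7));   // prints: 3
    return 0;
}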
Graph algorithms are a set of algorithms that are used to process, analyze and understand
graph data structures. Graphs are mathematical structures used to model relationships
between objects, where the objects are represented as vertices (or nodes) and the relationships
between them are represented as edges. Graph algorithms are used in a variety of applications
such as network analysis, social network analysis, recommendation systems, and in many other
areas where understanding the relationships between objects is important. Some of the
common graph algorithms include:
Shortest Path algorithms (e.g. Dijkstra’s, Bellman-Ford, A*)
Minimum Spanning Tree algorithms (e.g. Kruskal, Prim)
Maximum Flow algorithms (e.g. Ford-Fulkerson, Edmonds-Karp)
Network Flow algorithms (e.g. Bipartite Matching)
Connectivity algorithms (e.g. Depth-first Search, Breadth-first Search)
Why do we use algorithms?
Consider two kids, Aman and Rohan, solving the Rubik’s Cube. Aman knows how to solve it in a
definite number of steps. On the other hand, Rohan knows that he will do it but is not aware of
the procedure. Aman solves the cube within 2 minutes whereas Rohan is still stuck and by the end
of the day, he somehow managed to solve it (might have cheated as the procedure is necessary).
So the time required to solve with a procedure/algorithm is much more effective than that without
any procedure. Hence the need for an algorithm is a must.
In terms of designing a solution to an IT problem, computers are fast but not infinitely fast. The
memory may be inexpensive but not free. So, computing time is therefore a bounded resource
and so is the space in memory. So we should use these resources wisely and algorithms that are
efficient in terms of time and space will help you do so.
Creating an Algorithm:
Since the algorithm is language-independent, we write the steps to demonstrate the logic behind
the solution to be used for solving a problem. But before writing an algorithm, keep the following
points in mind:
• The algorithm should be clear and unambiguous.
• There should be 0 or more well-defined inputs in an algorithm.
• An algorithm must produce one or more well-defined outputs that are equivalent to the desired output.
• Algorithms must stop or end after a finite number of steps.
• In an algorithm, step-by-step instructions should be supplied, and they should be
independent of any computer code.
Example: algorithm to multiply 2 numbers and print the result:
Step 1: Start
Step 2: Get the knowledge of input. Here we need 3 variables; a and b will be the user input and
c will hold the result.
Step 3: Declare a, b, c variables.
Step 4: Take input for a and b variable from the user.
Step 5: Know the problem and find the solution using operators, data structures and logic
We need to multiply a and b variables so we use * operator and assign the result to c.
That is c <- a * b
Step 6: Check how to give output, Here we need to print the output. So write print c
Step 7: End
Example 1: Write an algorithm to find the maximum of all the elements present in the array.
Follow the algorithm approach as below:
Step 1: Start the Program
Step 2: Declare a variable max with the value of the first element of the array.
Step 3: Compare max with other elements using loop.
Step 4: If max < array element value, change max to new max.
Step 5: If no element is left, return or print max otherwise goto step 3.
Step 6: End of Solution
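The steps of Example 1 translate directly into C; a short sketch (with an arbitrary sample array) might look like this:

// Find the maximum of all elements in an array, following the
// step-by-step algorithm above.
#include <stdio.h>

int main() {
    int arr[] = {12, 45, 7, 89, 23};
    int n = sizeof(arr) / sizeof(arr[0]);

    int max = arr[0];               // Step 2: start with the first element
    for (int i = 1; i < n; i++) {   // Step 3: compare max with the others
        if (max < arr[i])           // Step 4: update max if a larger value is found
            max = arr[i];
    }
    printf("Maximum = %d\n", max);  // Step 5: print max
    return 0;
}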
Example 2: Write an algorithm to find the average of 3 subjects.
Follow the algorithm approach as below:
Step 1: Start the Program
Step 2: Declare and Read 3 Subject, let’s say S1, S2, S3
Step 3: Calculate the sum of all the 3 Subject values and store result in Sum variable (Sum =
S1+S2+S3)
Step 4: Divide Sum by 3 and assign it to Average variable. (Average = Sum/3)
Step 5: Print the value of Average of 3 Subjects
Step 6: End of Solution
Know about Algorithm Complexity:
Complexity in algorithms refers to the amount of resources (such as time or memory) required to
solve a problem or perform a task. The most common measure of complexity is time complexity,
which refers to the amount of time an algorithm takes to produce a result as a function of the size
of the input. Memory complexity refers to the amount of memory used by an algorithm. Algorithm
designers strive to develop algorithms with the lowest possible time and memory complexities,
since this makes them more efficient and scalable.
The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of
the amount of data the algorithm must process.
Usually there are natural units for the domain and range of this function.
An algorithm is analyzed using Time Complexity and Space Complexity. Writing an efficient algorithm helps to consume the minimum amount of time for processing the logic. For an algorithm A, it is judged on the basis of two parameters for an input of size n:
• Time Complexity: Time taken by the algorithm to solve the problem. It is measured by
calculating the iteration of loops, number of comparisons etc.
• Time complexity is a function describing the amount of time an algorithm takes in terms
of the amount of input to the algorithm.
• “Time” can mean the number of memory accesses performed, the number of comparisons
between integers, the number of times some inner loop is executed, or some other natural
unit related to the amount of real time the algorithm will take.
• Space Complexity: Space taken by the algorithm to solve the problem. It includes space
used by necessary input variables and any extra space (excluding the space taken by
inputs) that is used by the algorithm. For example, if we use a hash table (a kind of data
structure), we need an array to store values so
• this is an extra space occupied, hence will count towards the space complexity of the
algorithm. This extra space is known as Auxiliary Space.
• Space complexity is a function describing the amount of memory(space)an algorithm
takes in terms of the amount of input to the algorithm.
• Space complexity is sometimes ignored because the space used is minimal and/or obvious, but sometimes it becomes as important an issue as time.
The time complexity of the operations:
• The choice of data structure should be based on the time complexity of the operations that
will be performed.
• Time complexity is defined in terms of how much time it takes to run a given algorithm, based on the length of the input.

• The time complexity of an algorithm is the amount of time it takes for each statement to
complete. It is highly dependent on the size of the processed data.

• For example, if you need to perform searches frequently, you should use a binary search
tree.

The space complexity of the operations:


• The choice of data structure should be based on the space complexity of the operations
that will be performed.
• The amount of memory used by a program to execute it is represented by its space
complexity.
• Because a program requires memory to store input data and temporary values while running, the space complexity is the sum of auxiliary space and input space.
• For example, if you need to store a lot of data, you should use an array.
Cases in Complexity Analysis:
There are two commonly studied cases of complexity in algorithms:

1. Best case complexity: The best-case scenario for an algorithm is the scenario in which the algorithm performs the minimum amount of work (e.g. takes the shortest amount of time, uses the least amount of memory, etc.).

2. Worst case complexity: The worst-case scenario for an algorithm is the scenario in which the algorithm performs the maximum amount of work (e.g. takes the longest amount of time, uses the most amount of memory, etc.).
In analyzing the complexity of an algorithm, it is often more informative to study the worst-case
scenario, as this gives a guaranteed upper bound on the performance of the algorithm. Best-case
scenario analysis is sometimes performed, but is generally less important as it provides a lower
bound that is often trivial to achieve.
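
A simple way to see the two cases is linear search; the sketch below (with arbitrary sample data) hits its best case when the key is the first element and its worst case when the key is absent.

// Linear search: best case O(1) when the key is at index 0,
// worst case O(n) when the key is at the end or not present.
#include <stdio.h>

int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;      // found: best case if i == 0
    }
    return -1;             // not found: worst case, all n elements checked
}

int main() {
    int arr[] = {4, 8, 15, 16, 23, 42};
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d\n", linearSearch(arr, n, 4));    // best case: prints 0
    printf("%d\n", linearSearch(arr, n, 99));   // worst case: prints -1
    return 0;
}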
Advantages of Algorithms
• Easy to understand: Since it is a stepwise representation of a solution to a given problem,
it is easy to understand.

• Language Independent: It is not dependent on any programming language, so it can


easily be understood by anyone.

• Debug / Error Finding: Every step is independent / in a flow so it will be easy to spot and
fix the error.

• Sub-Problems: It is written in a flow so now the programmer can divide the tasks which
makes them easier to code.
Disadvantages of Algorithms
• Creating efficient algorithms is time-consuming and requires good logical skills.
• It is difficult to show branching and looping in algorithms.

Asymptotic Notations: Big O , Big Theta Θ ,Big Omega Ω


1. Big O notation (O):
Big O describes an upper bound on an algorithm: the most amount of time it can require (its worst-case performance). Big O notation is used to describe the asymptotic upper bound.
Mathematically, if f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist positive constants C and n0 such that
0 <= f(n) <= C*g(n) for all n >= n0
Here g(n) gives an upper bound on the function f(n).
If a function is O(n), it is automatically O(n^2) as well.
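As a small worked example (the function chosen here is arbitrary): take f(n) = 3n + 10 and g(n) = n. Choosing C = 4 and n0 = 10 gives 0 <= 3n + 10 <= 4n for all n >= 10, so 3n + 10 is O(n); it is also O(n^2), since n <= n^2 for all n >= 1.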
Graphic example for Big O :

2. Big Omega notation (Ω):
Big Omega describes a lower bound on an algorithm: the least amount of time it requires (the most efficient case possible, in other words the best case). Just as O notation provides an asymptotic upper bound, Ω notation provides an asymptotic lower bound.
Let f(n) define the running time of an algorithm. f(n) is said to be Ω(g(n)) if there exist positive constants C and n0 such that
0 <= C*g(n) <= f(n) for all n >= n0
Here g(n) gives a lower bound on the function f(n).
If a function is Ω(n^2), it is automatically Ω(n) as well.

Graphical example for Big Omega (Ω):


3. Big Theta notation (Θ):
Big Theta describes the tightest bound: the best of all the worst-case times that the algorithm can take.
Let f(n) define the running time of an algorithm. f(n) is said to be Θ(g(n)) if f(n) is O(g(n)) and f(n) is Ω(g(n)).
Mathematically, there exist positive constants C1, C2 and n0 such that

0 <= f(n) <= C1*g(n) for all n >= n0
0 <= C2*g(n) <= f(n) for all n >= n0

Merging both equations, we get:

0 <= C2*g(n) <= f(n) <= C1*g(n) for all n >= n0

This simply means that f(n) is sandwiched between C2*g(n) and C1*g(n).

Graphic example of Big Theta (Θ):

Difference Between Big O, Big Omega and Big Theta:

1. Intuition: Big O is like (<=): the rate of growth of an algorithm is less than or equal to a specified value. Big Omega (Ω) is like (>=): the rate of growth is greater than or equal to a specified value. Big Theta (Θ) is like (==): the rate of growth is equal to a specified value.
2. Bound: The upper bound of an algorithm is represented by Big O notation; the asymptotic upper bound is given by Big O. The algorithm's lower bound is represented by Omega notation; the asymptotic lower bound is given by Omega. Theta notation bounds the function from both above and below; the exact asymptotic behavior is given by Theta.
3. Summary: Big O – Upper Bound; Big Omega (Ω) – Lower Bound; Big Theta (Θ) – Tight Bound.
4. Meaning: Big O is the most amount of time required (the worst-case performance). Big Omega is the least amount of time required (the best-case performance). Big Theta is the tightest bound, the best of all the worst-case times that the algorithm can take.
5. Mathematically: Big O: 0 <= f(n) <= C*g(n) for all n >= n0. Big Omega: 0 <= C*g(n) <= f(n) for all n >= n0. Big Theta: 0 <= C2*g(n) <= f(n) <= C1*g(n) for all n >= n0.

little o and little omega notations

The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that
don’t depend on machine-specific constants, mainly because this analysis doesn’t require
algorithms to be implemented and time taken by programs to be compared. We have already
discussed Three main asymptotic notations. The following 2 more asymptotic notations are used
to represent the time complexity of algorithms.
Little o Asymptotic Notation:
Big O is used as a tight upper bound on the growth of an algorithm’s effort (this effort is described
by the function f(n)), even though, as written, it can also be a loose upper bound. “Little o” (o)
notation is used to describe an upper bound that cannot be tight.
In the domain of algorithm analysis, the little o notation is a valuable tool used to describe the
behavior of functions as they approach certain limits. When we say that a function f(n) is o(g(n)),
we are essentially stating that f(n) grows slower than g(n) as n approaches infinity. In simpler
terms, if f(n) = o(g(n)), it means that g(n) grows faster than f(n).

Thus, Little o means loose upper-bound of f(n). Little o is a rough estimate of the maximum order
of growth whereas Big-O may be the actual order of growth.
Mathematical Relation:

f(n) = o(g(n)) means lim (n→∞) f(n)/g(n) = 0

Example:
Is 7n + 8 ∈ o(n^2)?
For that to be true, for any c we have to be able to find an n0 that makes f(n) < c * g(n) asymptotically true.
For example, if c = 100, the inequality is clearly true. If c = 1/100, we have to use a little more imagination, but we can still find an n0 (try n0 = 1000). From these examples, the conjecture appears to be correct.
Then check the limit:
lim (n→∞) (7n + 8)/n^2 = lim (n→∞) 7/(2n) = 0 (by L'Hôpital's rule)

Hence 7n + 8 ∈ o(n^2).


Little ω Asymptotic Notation:
On the other hand, little ω notation is used to describe the relationship between two functions
when one grows strictly faster than the other. If a function f(n) is ω(g(n)), it means that g(n) grows
slower than f(n) as n approaches infinity.
f(n) has a higher growth rate than g(n). The main difference between Big Omega (Ω) and little omega (ω) lies in their definitions: in the case of Big Omega, f(n) = Ω(g(n)) and the bound 0 <= c*g(n) <= f(n) holds for some positive constant c, but in the case of little omega, 0 <= c*g(n) < f(n) holds (for sufficiently large n) for every positive constant c.
The relationship between Big Omega (Ω) and little omega (ω) is similar to that of Big O and little o, except that now we are looking at the lower bounds. Little omega (ω) is a rough estimate of the order of growth, whereas Big Omega (Ω) may represent the exact order of growth. We use ω notation to denote a lower bound that is not asymptotically tight, and f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).

Mathematical Relation:
if f(n) ∈ ω(g(n)) then lim (n→∞) f(n)/g(n) = ∞
Example:
Prove that 4n + 6 ∈ ω(1).
The little omega (ω) running time can be proven by applying the limit formula given below:
if lim (n→∞) f(n)/g(n) = ∞ then f(n) is ω(g(n))
Here we have the functions f(n) = 4n + 6 and g(n) = 1, and
lim (n→∞) (4n + 6)/1 = ∞
Also, for any c we can find an n0 for the inequality 0 <= c*g(n) < f(n), i.e. 0 <= c*1 < 4n + 6. Hence proved.
Conclusion:
In conclusion, little o and little ω notations are essential tools in algorithm analysis that allow us
to compare the growth rates of functions in a precise manner. By understanding these notations,
we can better analyze and predict the performance of algorithms as their input sizes grow.
Stack Data Structure
Stack is a linear data structure that follows LIFO (Last In First Out) Principle, the last element
inserted is the first to be popped out. It means both insertion and deletion operations happen at
one end only.

LIFO(Last In First Out) Principle


Here are some real world examples of LIFO
• Consider a stack of plates. When we add a plate, we add at the top. When we remove,
we remove from the top.

• A shuttlecock box (or any other box that is closed at one end) is another great real-world example of the LIFO (Last In, First Out) principle, where we do insertions and removals from the same end.
Representation of Stack Data Structure:

Stack follows LIFO (Last In First Out) Principle so the element which is pushed last is popped first.

Types of Stack:
• Fixed Size Stack : As the name suggests, a fixed size stack has a fixed size and cannot grow
or shrink dynamically. If the stack is full and an attempt is made to add an element to it,
an overflow error occurs. If the stack is empty and an attempt is made to remove an
element from it, an underflow error occurs.
• Dynamic Size Stack : A dynamic size stack can grow or shrink dynamically. When the stack
is full, it automatically increases its size to accommodate the new element, and when the
stack is empty, it decreases its size. This type of stack is implemented using a linked list,
as it allows for easy resizing of the stack.
Basic Operations on Stack:
In order to make manipulations in a stack, there are certain operations provided to us.
• push() to insert an element into the stack
• pop() to remove an element from the stack

• top() Returns the top element of the stack.


• isEmpty() returns true if stack is empty else false.
• isFull() returns true if the stack is full else false.
To implement stack, we need to maintain reference to the top item.
Push Operation on Stack

Adds an item to the stack. If the stack is full, then it is said to be an Overflow condition.
Algorithm for Push Operation:
• Before pushing the element to the stack, we check if the stack is full .
• If the stack is full (top == capacity-1) , then Stack Overflows and we cannot insert the
element to the stack.
• Otherwise, we increment the value of top by 1 (top = top + 1) and the new value is
inserted at top position .
• The elements can be pushed into the stack till we reach the capacity of the stack.

Pop Operation in Stack


Removes an item from the stack. The items are popped in the reversed order in which they are
pushed. If the stack is empty, then it is said to be an Underflow condition.
Algorithm for Pop Operation:
• Before popping the element from the stack, we check if the stack is empty .
• If the stack is empty (top == -1), then Stack Underflows and we cannot remove any
element from the stack.
• Otherwise, we store the value at top, decrement the value of top by 1 (top = top – 1) and
return the stored top value.

Top or Peek Operation on Stack


Returns the top element of the stack.

Algorithm for Top Operation:


• Before returning the top element from the stack, we check if the stack is empty.
• If the stack is empty (top == -1), we simply print “Stack is empty”.
• Otherwise, we return the element stored at index = top .

isEmpty Operation in Stack Data Structure:

Returns true if the stack is empty, else false.


Algorithm for isEmpty Operation:
• Check for the value of top in stack.
• If (top == -1), then the stack is empty so return true .
• Otherwise, the stack is not empty so return false .

isFull Operation in Stack Data Structure:


Returns true if the stack is full, else false.
Algorithm for isFull Operation:
• Check for the value of top in stack.

• If (top == capacity-1), then the stack is full so return true.


• Otherwise, the stack is not full so return false.

Implementation of Stack

The basic operations that can be performed on a stack include push, pop, and peek. There are
two ways to implement a stack –
• Implementation of Stack using Array
• Implementation of Stack using Linked List
• Implementation of Stack using Deque
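
A minimal array-based sketch of the stack operations described above is shown below (fixed capacity, int elements; the names Stack, push, pop, etc. are illustrative rather than a standard API).

// A fixed-size, array-based stack with push, pop, peek, isEmpty and isFull.
#include <stdio.h>
#include <stdbool.h>

#define CAPACITY 100

typedef struct {
    int data[CAPACITY];
    int top;               // index of the top element, -1 when empty
} Stack;

void init(Stack *s) { s->top = -1; }
bool isEmpty(const Stack *s) { return s->top == -1; }
bool isFull(const Stack *s) { return s->top == CAPACITY - 1; }

bool push(Stack *s, int value) {
    if (isFull(s))
        return false;                 // overflow
    s->data[++s->top] = value;
    return true;
}

bool pop(Stack *s, int *out) {
    if (isEmpty(s))
        return false;                 // underflow
    *out = s->data[s->top--];
    return true;
}

bool peek(const Stack *s, int *out) {
    if (isEmpty(s))
        return false;
    *out = s->data[s->top];
    return true;
}

int main() {
    Stack s;
    int x;
    init(&s);
    push(&s, 10);
    push(&s, 20);
    push(&s, 30);
    pop(&s, &x);
    printf("popped %d\n", x);         // popped 30 (last in, first out)
    peek(&s, &x);
    printf("top is %d\n", x);         // top is 20
    return 0;
}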
Complexity Analysis of Operations on Stack Data Structure:

Operations Time Complexity Space Complexity

push() O(1) O(1)

pop() O(1) O(1)

top() or peek() O(1) O(1)

isEmpty() O(1) O(1)

isFull() O(1) O(1)

Queue Data Structure


Queue is a linear data structure that follows FIFO (First In First Out) Principle, so the first element
inserted is the first to be popped out.

FIFO Principle in Queue:


FIFO Principle states that the first element added to the Queue will be the first one to be removed
or processed. So, Queue is like a line of people waiting to purchase tickets, where the first person
in line is the first person served. (i.e. First Come First Serve).
Basic Terminologies of Queue
• Front: Position of the entry in a queue ready to be served, that is, the first entry that will
be removed from the queue, is called the front of the queue. It is also referred as
the head of the queue.
• Rear: Position of the last entry in the queue, that is, the one most recently added, is called
the rear of the queue. It is also referred as the tail of the queue.
• Size: Size refers to the current number of elements in the queue.
• Capacity: Capacity refers to the maximum number of elements the queue can hold.
Representation of Queue

Operations on Queue
1. Enqueue:
Enqueue operation adds (or stores) an element to the end of the queue.
Steps:

1. Check if the queue is full. If so, return an overflow error and exit.
2. If the queue is not full, increment the rear pointer to the next available position.
3. Insert the element at the rear.

2. Dequeue:

Dequeue operation removes the element at the front of the queue. The following steps are taken
to perform the dequeue operation:

1. Check if the queue is empty. If so, return an underflow error.


2. Remove the element at the front.
3. Increment the front pointer to the next element.
3. Peek or Front Operation:
This operation returns the element at the front end without removing it.
4. Size Operation:

This operation returns the numbers of elements present in the queue.


5. isEmpty Operation:
This operation returns a boolean value that indicates whether the queue is empty or not.
6. isFull Operation:
This operation returns a boolean value that indicates whether the queue is full or not.

Implementation of Queue Data Structure


Queue can be implemented using following data structures:
• Array implementation of queue
• Implementation of Queue using Linked List
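
A minimal circular-array sketch of the queue operations described above is shown below (fixed capacity, int elements; the names Queue, enqueue, dequeue are illustrative).

// A fixed-size queue implemented with a circular array.
#include <stdio.h>
#include <stdbool.h>

#define CAPACITY 100

typedef struct {
    int data[CAPACITY];
    int front;     // index of the front element
    int size;      // current number of elements
} Queue;

void init(Queue *q) { q->front = 0; q->size = 0; }
bool isEmpty(const Queue *q) { return q->size == 0; }
bool isFull(const Queue *q) { return q->size == CAPACITY; }

bool enqueue(Queue *q, int value) {
    if (isFull(q))
        return false;                                // overflow
    int rear = (q->front + q->size) % CAPACITY;      // next free slot
    q->data[rear] = value;
    q->size++;
    return true;
}

bool dequeue(Queue *q, int *out) {
    if (isEmpty(q))
        return false;                                // underflow
    *out = q->data[q->front];
    q->front = (q->front + 1) % CAPACITY;
    q->size--;
    return true;
}

int main() {
    Queue q;
    int x;
    init(&q);
    enqueue(&q, 1);
    enqueue(&q, 2);
    enqueue(&q, 3);
    dequeue(&q, &x);
    printf("dequeued %d\n", x);   // dequeued 1 (first in, first out)
    return 0;
}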
Complexity Analysis of Operations on Queue

Operations Time Complexity Space Complexity

enqueue O(1) O(1)

dequeue O(1) O(1)

front O(1) O(1)

size O(1) O(1)

isEmpty O(1) O(1)

isFull O(1) O(1)

Types of Queues

Queue data structure can be classified into 4 types:


1. Simple Queue: Simple Queue simply follows FIFO Structure. We can only insert the
element at the back and remove the element from the front of the queue.
2. Double-Ended Queue (Deque): In a double-ended queue the insertion and deletion
operations, both can be performed from both ends. They are of two types:
• Input Restricted Queue: This is a simple queue. In this type of queue, the input
can be taken from only one end but deletion can be done from any of the ends.
• Output Restricted Queue: This is also a simple queue. In this type of queue, the
input can be taken from both ends but deletion can be done from only one end.
3. Circular Queue: This is a special type of queue where the last position is connected back
to the first position. Here also the operations are performed in FIFO order.
4. Priority Queue: A priority queue is a special queue where the elements are accessed
based on the priority assigned to them. They are of two types:
• Ascending Priority Queue: In Ascending Priority Queue, the elements are
arranged in increasing order of their priority values. Element with smallest priority
value is popped first.
• Descending Priority Queue: In Descending Priority Queue, the elements are
arranged in decreasing order of their priority values. Element with largest priority
is popped first.
Applications of Queue Data Structure
Applications of queues are common. In a computer system, there may be queues of tasks waiting for the printer, for access to disk storage, or, in a time-sharing system, for use of the CPU. Within a single program, there may be multiple requests to be kept in a queue, or one task may create other tasks, which must be done in turn by keeping them in a queue.
• A Queue is always used as a buffer when we have a speed mismatch between a producer
and consumer. For example keyboard and CPU.
• Queue can be used where we have a single resource and multiple consumers like a single
CPU and multiple processes.
• In a network, a queue is used in devices such as a router/switch and mail queue.
• Queue can be used in various algorithm techniques like Breadth First Search, Topological
Sort, etc.
FAQs (Frequently asked questions) on Queue Data Structure:

1. Which principle is followed by Queue data structure?


Queue follows FIFO (First-In-First-Out) principle, so the element which is inserted first will be
deleted first.
2. In data structures, what is a double-ended queue?
In a double-ended queue, elements can be inserted and removed at both ends.
3. What are the four types of Queue data structure?
The four types of Queue are: Simple Queue, Double-ended queue, Circular Queue and Priority
Queue.
4. What are some real-life applications of Queue data structure?

Real-life applications of queue include: Cashier Line in Stores, CPU Scheduling, Disk Scheduling,
Serving requests on a shared resource like Printer.
5. What are some limitations of Queue data structure?
Some of the limitations of Queue are: deletion of a middle element is not possible until all the elements before it have been deleted. Also, random access of any element takes linear time.
6. What are the different ways to implement a Queue?
Queue data structure can be implemented by using Arrays or by using Linked List. The Array
implementation requires the size of the Queue to be specified at the time of declaration.
7. What are some common operations in Queue data structure?
Some common operations on Queue are: Insertion or enqueue, Deletion or dequeue, front, rear,
isFull, isEmpty, etc.
