Unit i Notes

This document provides an introduction to data structures, explaining their importance in organizing and storing data efficiently. It categorizes data structures into linear and non-linear types, detailing examples such as arrays, linked lists, stacks, queues, trees, and graphs. Additionally, it discusses time and space complexity, algorithm analysis, and the significance of choosing appropriate data structures for optimizing performance.


UNIT-I

1) Introduction to data structures:


A data structure is a particular way of organizing and storing data in a computer so that it can be used efficiently. The idea is to reduce the space and time complexity of different tasks: a well-chosen data structure makes it possible to perform a variety of critical operations effectively while using minimal memory space and execution time.
Arrays, linked lists, stacks, queues, trees, graphs, etc. are all data structures that store data in a particular way so that we can access and use it efficiently. Each of these data structures organizes data differently, so we choose a data structure based on the requirement.

Data Structure Types:


We have two types of data structures:

a) Linear data structures

b) Non-linear data structures

Linear data structures: Elements of a linear data structure are accessed in a sequential manner; however, the elements can be stored in these data structures in any order. Examples of linear data structures are: LinkedList, Stack, Queue and Array.
Non-linear data structures: Elements of non-linear data structures are stored and accessed in a non-linear order. Examples of non-linear data structures are: Tree and Graph.

Classification of Data Structure:

Array: An array is a collection of data items stored at contiguous memory


locations. The idea is to store multiple items of the same type together.

Memory allocation of an array:


As stated above, all the data elements of an array are stored at contiguous locations
in the main memory. The name of the array represents the base address or the
address of the first element in the main memory. Each element of the array is
represented by proper indexing.

We can define the indexing of an array in the below ways -

1. 0-based indexing: The first element of the array will be arr[0].
2. 1-based indexing: The first element of the array will be arr[1].
3. n-based indexing: The base index of the array can be freely chosen.

Array declaration along with initialization:


int arr[5]={1,2,3,4,5};

Linked Lists: Like arrays, Linked List is a linear data structure. Unlike arrays,
linked list elements are not stored at a contiguous location; the elements are
linked using pointers.
Stack: Stack is a linear data structure which follows a particular order in which
the operations are performed. The order may be LIFO(Last In First Out) or
FILO(First In Last Out). In stack, all insertion and deletion are permitted at only
one end of the list.

Mainly, the following basic operations are performed on a stack:

 Initialize: Make a stack empty.


 Push: Adds an item in the stack. If the stack is full, then it is said to be an
Overflow condition.
 Pop: Removes an item from the stack. The items are popped in the reversed
order in which they are pushed. If the stack is empty, then it is said to be an
Underflow condition.
 Peek or Top: Returns top element of the stack.
 isEmpty: Returns true if the stack is empty, else false.
Queue: Like Stack, Queue is a linear structure which follows a particular order in
which the operations are performed. The order is First In First Out (FIFO). In the
queue, items are inserted at one end and deleted from the other end. A good
example of the queue is any queue of consumers for a resource where the
consumer that came first is served first. The difference between stacks and
queues is in removing: in a stack, we remove the most recently added item; in
a queue, we remove the least recently added item.

Mainly the following four basic operations are performed on queue:

 Enqueue: Adds an item to the queue. If the queue is full, then it is said to be
an Overflow condition.
 Dequeue: Removes an item from the queue. The items are removed in the same
order in which they are inserted. If the queue is empty, then it is said to be an
Underflow condition.
 Front: Get the front item from the queue.
 Rear: Get the last item from the queue.
Binary Tree: Unlike Arrays, Linked Lists, Stack and queues, which are linear
data structures, trees are hierarchical data structures. A binary tree is a tree data
structure in which each node has at most two children, which are referred to as
the left child and the right child. It is implemented mainly using Links.
A Binary Tree is represented by a pointer to the topmost node in the tree. If the
tree is empty, then the value of root is NULL. A Binary Tree node contains the
following parts.
1. Data

2. Pointer to left child

3. Pointer to the right child

Binary Search Tree: A Binary Search Tree is a Binary Tree following the
additional properties:

 The left subtree of the root node contains only keys less than the root node's key.
 The right subtree of the root node contains only keys greater than the root node's key.
 There are no duplicate keys in the tree.
A binary tree having these properties is known as a binary search tree (BST).

Heap: A Heap is a special Tree-based data structure in which the tree is a


complete binary tree. Generally, Heaps can be of two types:

 Max-Heap: In a Max-Heap the key present at the root node must be greatest
among the keys present at all of its children. The same property must be
recursively true for all sub-trees in that Binary Tree.
 Min-Heap: In a Min-Heap the key present at the root node must be minimum
among the keys present at all of its children. The same property must be
recursively true for all sub-trees in that Binary Tree.
Why we need data structures?
Speed: Speed is an important factor when we are processing data. Suppose an
organization has thousands of employee records and wants to look one up; the
process needs to be as fast as possible, since nobody is willing to spend ten
minutes looking up a record in a database.

Data structures allow faster retrieval, addition, deletion and modification of
records.

Save disk space: When we are dealing with large amounts of data, it becomes
important to save as much disk space as possible. Data structures allow
efficient storage of data and thus save disk space.

Handling multiple requests: Data structures allow multiple data-processing
requests, for example one person fetching a record while another person is
simultaneously updating the data. Data structures allow multiple users to
access and modify data simultaneously.

Advantages of Data Structures:


Efficiency: The efficiency of a program depends upon the choice of data structures.
For example, suppose we have some data and we need to search for a particular
record. If we organize the data in an unsorted array, we have to search
sequentially, element by element, so an array may not be very efficient here.
Better data structures, such as an ordered array, a binary search tree or a hash
table, can make the search process more efficient.
Reusability: Data structures are reusable, i.e. once we have implemented a
particular data structure, we can use it at any other place. Implementation of data
structures can be compiled into libraries which can be used by different clients.

Abstraction: A data structure is specified by its ADT (abstract data type), which
provides a level of abstraction. The client program uses the data structure
through its interface only, without getting into the implementation details.
Operations on data structure:
1) Traversing: Every data structure contains the set of data elements. Traversing
the data structure means visiting each element of the data structure in order to
perform some specific operation like searching or sorting.

Example: If we need to calculate the average of the marks obtained by a student in
6 different subjects, we need to traverse the complete array of marks, calculate
the total sum, and then divide that sum by the number of subjects, i.e. 6, to
find the average.

2) Insertion: Insertion can be defined as the process of adding elements to the
data structure at any location.

If the size of the data structure is n, then it can hold at most n data elements;
inserting into a full structure causes overflow.

3) Deletion: The process of removing an element from the data structure is called
Deletion. We can delete an element from the data structure at any random location.

If we try to delete an element from an empty data structure, then underflow occurs.
4) Searching: The process of finding the location of an element within the data
structure is called Searching. There are two algorithms to perform searching,
Linear Search and Binary Search. We will discuss each one of them later in this
tutorial.

5) Sorting: The process of arranging the data structure in a specific order is known
as Sorting. There are many algorithms that can be used to perform sorting, for
example, insertion sort, selection sort, bubble sort, etc.

6) Merging: When two lists, List A of size M and List B of size N, containing
similar types of elements, are clubbed or joined to produce a third list, List C
of size (M+N), this process is called merging.

2)What Is Time Complexity?


Time complexity is defined in terms of how the number of steps needed to run a
given algorithm grows with the length of the input. It is not a measurement of
the actual time it takes to execute a particular algorithm, because factors such
as programming language, operating system, and processing power are deliberately
left out of consideration.

Time complexity is a type of computational complexity that describes the time
required to execute an algorithm: the total time taken by its statements to
complete. As a result, it is highly dependent on the size of the processed data.
It also helps define an algorithm's effectiveness and evaluate its performance.

What Is Space Complexity?


When an algorithm is run on a computer, it necessitates a certain amount of
memory space. The amount of memory used by a program during execution is
represented by its space complexity. Because a program requires memory to store
input data and temporary values while running, the space complexity is the sum
of the auxiliary space and the input space.

Algorithm Analysis:

Analysis of the efficiency of an algorithm can be performed at two different
stages: before implementation and after implementation.

A priori analysis − This is the theoretical analysis of an algorithm. The
efficiency of the algorithm is measured by assuming that all other factors,
e.g. processor speed, are constant and have no effect on the implementation.

A posteriori analysis − This is the empirical analysis of an algorithm. The
chosen algorithm is implemented in a programming language and executed on a
target computer. In this analysis, actual statistics such as running time and
space required are collected.

Algorithm analysis deals with the execution or running time of the various
operations involved. The running time of an operation can be defined as the
number of computer instructions executed per operation.

Algorithm Complexity:
Suppose X is an algorithm and N is the size of the input data. The time and
space used by algorithm X are the two main factors which determine its
efficiency.

Time Factor − The time is measured by counting the number of key operations,
such as comparisons in a sorting algorithm.

Space Factor − The space is measured by counting the maximum memory space
required by the algorithm.

The complexity of an algorithm f(N) gives the running time and/or storage space
needed by the algorithm with respect to N as the size of the input data.

Space Complexity:

The space complexity of an algorithm represents the amount of memory space
needed by the algorithm in its life cycle.

The space needed by an algorithm is equal to the sum of the following two
components:

A fixed part: the space required to store certain data and variables (i.e.
simple variables and constants, program size, etc.) that is independent of the
size of the problem.

A variable part: the space required by variables whose size depends entirely on
the size of the problem, for example recursion stack space, dynamically
allocated memory, etc.

The space complexity S(p) of an algorithm p is S(p) = A + Sp(I), where A is the
fixed part and Sp(I) is the variable part, which depends on the instance
characteristic I. The following simple example illustrates the concept.

Algorithm

SUM(P, Q)

Step 1 - START

Step 2 - R ← P + Q + 10

Step 3 - Stop

Here we have three variables (P, Q and R) and one constant (10). Hence
S(p) = 3 + 1. The actual space depends on the data types of the given variables
and constant, and is multiplied accordingly.

Time Complexity:

Time complexity of an algorithm signifies the total time required by the program to
run till its completion.
Time complexity is most commonly estimated by counting the number of elementary
steps an algorithm performs to finish execution. For example, a loop that runs n
times takes at least n steps, and as the value of n increases, the time taken
also increases; by contrast, a statement executed once has constant time
complexity, because it never depends on the value of n. Since an algorithm's
performance may vary with different types of input data, we usually use the
worst-case time complexity of an algorithm, because that is the maximum time
taken over all inputs of a given size.

Calculating Time Complexity:


Now let's move on to the next big topic related to time complexity: how to
calculate it. It can sometimes be confusing, but we will explain it in the
simplest way.

The most common metric for calculating time complexity is Big O notation. It
removes all constant factors so that the running time can be estimated in
relation to N as N approaches infinity. In general you can think of it like this:

statement;
Above we have a single statement. Its Time Complexity will be Constant. The
running time of the statement will not change in relation to N.

for(i=0; i < N; i++)


{
statement;
}
The time complexity for the above algorithm will be Linear. The running time of
the loop is directly proportional to N. When N doubles, so does the running time.

for(i=0; i < N; i++)


{
for(j=0; j < N;j++)
{
statement;
}
}
This time, the time complexity for the above code will be Quadratic. The running
time of the two nested loops is proportional to the square of N: when N doubles,
the running time increases fourfold.

while(low <= high)


{
mid = (low + high) / 2;
if (target < list[mid])
high = mid - 1;
else if (target > list[mid])
low = mid + 1;
else break;
}

This is an algorithm that repeatedly breaks a set of numbers into halves to
search for a particular value (we will study this in detail later). This
algorithm has Logarithmic time complexity. The running time is proportional to
the number of times N can be divided by 2 (here N is high - low), because the
algorithm halves the working area with each iteration.

void quicksort(int list[], int left, int right)

{
if (left >= right) return;   /* base case: 0 or 1 element */
int pivot = partition(list, left, right);
quicksort(list, left, pivot - 1);
quicksort(list, pivot + 1, right);
}
Taking the previous algorithm forward, above we have a small piece of Quick
Sort logic (we will study this in detail later). Quick Sort divides the list
into parts around a pivot each time, giving about log(N) levels of division,
but at each level it does work proportional to N (where N is the size of the
list). Hence the time complexity is N*log(N): the algorithm is a combination of
linear and logarithmic behaviour.

NOTE: In general, doing something with every item in one dimension is linear,
doing something with every item in two dimensions is quadratic, and dividing the
working area in half is logarithmic.

Big O Notation   Name           Example(s)

O(1)             Constant       # Odd or even number
                                # Look-up table (on average)
O(log n)         Logarithmic    # Finding an element in a sorted array
                                  with binary search
O(n)             Linear         # Finding the max element in an unsorted array
                                # Duplicate elements in an array with a hash map
O(n log n)       Linearithmic   # Sorting elements in an array with merge sort
O(n^2)           Quadratic      # Duplicate elements in an array (naive)
                                # Sorting an array with bubble sort
O(n^3)           Cubic          # 3-variable equation solver
O(2^n)           Exponential    # Finding all subsets
O(n!)            Factorial      # Finding all permutations of a given set/string

O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < - - - - - - - < O(2^n) < O(n!)

The order of time complexities from best to worst.

3)Asymptotic Analysis:
The efficiency of an algorithm depends on the amount of time, storage and other
resources required to execute the algorithm. The efficiency is measured with the
help of asymptotic notations.

An algorithm may not have the same performance for different types of inputs.
With the increase in the input size, the performance will change.

The study of change in performance of the algorithm with the change in the order
of the input size is defined as asymptotic analysis.

Asymptotic Notations
Asymptotic notations are the mathematical notations used to describe the running
time of an algorithm when the input tends towards a particular value or a limiting
value.

For example: In bubble sort, when the input array is already sorted, the time taken
by the algorithm is linear i.e. the best case.

But, when the input array is in reverse order, the algorithm takes the maximum
(quadratic) time to sort the elements, i.e. the worst case.

When the input array is neither sorted nor in reverse order, then it takes average
time. These durations are denoted using asymptotic notations.

There are mainly three asymptotic notations:

Big-O notation
Omega notation
Theta notation

Big-O Notation (O-notation)


Big-O notation represents the upper bound of the running time of an algorithm.
Thus, it gives the worst-case complexity of an algorithm.
O(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
The above expression can be described as a function f(n) belongs to the set O(g(n))
if there exists a positive constant c such that it lies between 0 and cg(n), for
sufficiently large n.
For any value of n, the running time of an algorithm does not cross the time
provided by O(g(n)).

Since it gives the worst-case running time of an algorithm, it is widely used to
analyze an algorithm, as we are always interested in the worst-case scenario.

Omega Notation (Ω-notation):


Omega notation represents the lower bound of the running time of an algorithm.
Thus, it provides the best case complexity of an algorithm.
Ω(g(n)) = { f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
The above expression can be described as a function f(n) belongs to the set Ω(g(n))
if there exists a positive constant c such that it lies above cg(n), for sufficiently
large n.

For any value of n, the minimum time required by the algorithm is given by
Omega Ω(g(n)).

Theta Notation (Θ-notation)


Theta notation encloses the function from above and below. Since it represents the
upper and the lower bound of the running time of an algorithm, it is used for
analyzing the average-case complexity of an algorithm.
For a function g(n), Θ(g(n)) is given by the relation:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0


such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
The above expression can be described as a function f(n) belongs to the set Θ(g(n))
if there exist positive constants c1 and c2 such that it can be sandwiched between
c1g(n) and c2g(n), for sufficiently large n.

If a function f(n) lies anywhere between c1g(n) and c2g(n) for all n ≥ n0, then
g(n) is said to be an asymptotically tight bound of f(n).

TIME COMPLEXITY is mainly analyzed for:

1) Worst-case scenarios
2) Very large input sizes

Stack:
Stack is a linear data structure that follows the LIFO (Last In First Out)
principle. It performs insertion and deletion operations at only one end, the
top of the stack. Inserting a new element on the top of the stack is known as
the push operation, and deleting an element from the top of the stack is known
as the pop operation. A stack can be implemented in memory using two data
structures: stack implementation using an array and stack implementation using
a linked list.

Stack Implementation Using Array:

In stack implementation using arrays, the stack is built on top of an array,
and all stack operations are carried out on that array.
Basic Operations on Stack
In order to make manipulations in a stack, there are certain operations provided to
us.

push() to insert an element into the stack


pop() to remove an element from the stack
top() Returns the top element of the stack.
isEmpty() returns true if the stack is empty, else false
size() returns the size of stack
Stack Operations using Array
A stack can be implemented using array as follows...

Before implementing actual operations, first follow the below steps to create an
empty stack.

Step 1 - Include all the header files which are used in the program and define a
constant 'SIZE' with specific value.
Step 2 - Declare all the functions used in stack implementation.
Step 3 - Create a one dimensional array with fixed size (int stack[SIZE])
Step 4 - Define an integer variable 'top' and initialize it with '-1'. (int top = -1)
Step 5 - In main method, display menu with list of operations and make suitable
function calls to perform operation selected by the user on the stack.
push(value) - Inserting value into the stack
In a stack, push() is a function used to insert an element into the stack. In a stack,
the new element is always inserted at top position. Push function takes one integer
value as parameter and inserts that value into the stack. We can use the following
steps to push an element on to the stack...

Step 1 - Check whether stack is FULL. (top == SIZE-1)


Step 2 - If it is FULL, then display "Stack is FULL!!! Insertion is not possible!!!"
and terminate the function.
Step 3 - If it is NOT FULL, then increment top value by one (top++) and set
stack[top] to value (stack[top] = value).
pop() - Delete a value from the Stack
In a stack, pop() is a function used to delete an element from the stack. In a stack,
the element is always deleted from top position. Pop function does not take any
value as parameter. We can use the following steps to pop an element from the
stack...

Step 1 - Check whether stack is EMPTY. (top == -1)


Step 2 - If it is EMPTY, then display "Stack is EMPTY!!! Deletion is not
possible!!!" and terminate the function.
Step 3 - If it is NOT EMPTY, then delete stack[top] and decrement top value by
one (top--).
display() - Displays the elements of a Stack
We can use the following steps to display the elements of a stack...

Step 1 - Check whether stack is EMPTY. (top == -1)


Step 2 - If it is EMPTY, then display "Stack is EMPTY!!!" and terminate the
function.
Step 3 - If it is NOT EMPTY, then define a variable 'i' and initialize it with top.
Step 4 - Display stack[i] and decrement i by one (i--). Repeat this step as long
as i is greater than or equal to '0'.
Push:
Adds an item to the stack. If the stack is full, then it is said to be an Overflow
condition.

Algorithm for push:

begin
   if stack is full
      return
   else
      increment top
      stack[top] := value
   endif
end procedure
Pop:
Removes an item from the stack. The items are popped in the reversed order in
which they are pushed. If the stack is empty, then it is said to be an Underflow
condition.

Algorithm for pop:

begin
   if stack is empty
      return
   else
      store value of stack[top]
      decrement top
      return value
   endif
end procedure
Top:
Returns the top element of the stack.

Algorithm for Top:


begin
return stack[top]
end procedure

isEmpty:
Returns true if the stack is empty, else false.

Algorithm for isEmpty:

begin
   if top < 0
      return true
   else
      return false
   endif
end procedure
Understanding stack practically:
There are many real-life examples of a stack. Consider the simple example of
plates stacked over one another in a canteen. The plate which is at the top is the
first one to be removed, i.e. the plate which has been placed at the bottommost
position remains in the stack for the longest period of time. So, it can be simply
seen to follow the LIFO/FILO order.
Complexity Analysis:
 Time Complexity

Operation    Complexity

push()       O(1)
pop()        O(1)
isEmpty()    O(1)
size()       O(1)

Stack using arrays in C:

#include<stdio.h>
#include<stdlib.h>
#define N 10
int stack[N], top = -1;
void push(int x){
if(top == N-1)
printf("\nStack is Full!!! Insertion is not possible!!!");
else{
top++;
stack[top] = x;
printf("\nInsertion success!!!");
}
}
void pop(){
if(top == -1)
printf("\nStack is Empty!!! Deletion is not possible!!!");
else{
printf("\nDeleted : %d", stack[top]);
top--;
}
}
void display(){
if(top == -1)
printf("\nStack is Empty!!!");
else{
int i;
printf("\nStack elements are:\n");
for(i=top; i>=0; i--)
printf("%d\n",stack[i]);
}
}

int main()
{
int x, choice = 0;
while(choice != 4){
printf("\n\n***** MENU *****\n");
printf("1. Push\n2. Pop\n3. Display\n4. Exit");
printf("\nEnter your choice: ");
scanf("%d",&choice);
switch(choice){
case 1: printf("Enter the value to be inserted: ");
scanf("%d",&x);
push(x);
break;
case 2: pop();
break;
case 3: display();
break;
case 4: exit(0);
default: printf("invalid choice");
}
}
return 0;
}

Advantages of Stack:

Stacks help in managing data that follows the LIFO technique.
Stacks can be used for systematic memory management.
Stacks are used in many virtual machines like the JVM.
When a function is called, the local variables and other function parameters are
stored on the stack and automatically destroyed once the function returns;
hence, efficient function management.
Stacks are more secure and reliable as they do not get corrupted easily.
A stack allows control over memory allocation and deallocation.
A stack cleans up its objects automatically.
Disadvantages of Stack:

Stack memory is of limited size.
The total size of the stack must be defined beforehand.
If too many objects are created, it can lead to stack overflow.
Random access is not possible in a stack.
If the stack falls outside its memory region, it can lead to abnormal termination.
Applications of stack:

Evaluation of Arithmetic Expressions


Backtracking
Delimiter Checking
Reverse a Data
Processing Function Calls

1. Evaluation of Arithmetic Expressions


A stack is a very effective data structure for evaluating arithmetic expressions in
programming languages. An arithmetic expression consists of operands and
operators.
In addition to operands and operators, an arithmetic expression may also include
parentheses, i.e. a "left parenthesis" and a "right parenthesis".

Example: A + (B - C)
To evaluate the expressions, one needs to be aware of the standard precedence
rules for arithmetic expression. The precedence rules for operators are:

Category        Operator                              Associativity

Postfix         () [] -> . ++ --                      Left to right
Unary           + - ! ~ ++ -- (type)* & sizeof        Right to left
Multiplicative  * / %                                 Left to right
Additive        + -                                   Left to right
Shift           << >>                                 Left to right
Relational      < <= > >=                             Left to right
Equality        == !=                                 Left to right
Bitwise AND     &                                     Left to right
Bitwise XOR     ^                                     Left to right
Bitwise OR      |                                     Left to right
Logical AND     &&                                    Left to right
Logical OR      ||                                    Left to right
Conditional     ?:                                    Right to left
Assignment      = += -= *= /= %= >>= <<= &= ^= |=     Right to left
Comma           ,                                     Left to right


Notations for Arithmetic Expression
There are three notations to represent an arithmetic expression:

Infix Notation
Prefix Notation
Postfix Notation
Infix Notation

The infix notation is a convenient way of writing an expression in which each
operator is placed between the operands. Infix expressions can be parenthesized
or unparenthesized depending upon the problem requirement.

Example: A + B, (C - D) etc.

All these expressions are in infix notation because the operator comes between the
operands.
Prefix Notation

The prefix notation places the operator before the operands. This notation was
introduced by the Polish mathematician Jan Łukasiewicz and is hence often
referred to as Polish notation.

Example: + A B, - C D, etc.


All these expressions are in prefix notation because the operator comes before the
operands.
Postfix Notation:
The postfix notation places the operator after the operands. This notation is just the
reverse of Polish notation and also known as Reverse Polish notation.

Example: A B +, C D +, etc.

All these expressions are in postfix notation because the operator comes after the
operands.
Conversion of Arithmetic Expression into various Notations:

Infix Notation      Prefix Notation    Postfix Notation

A * B               * A B              A B *
(A + B) / C         / + A B C          A B + C /
(A * B) + (D - C)   + * A B - D C      A B * D C - +

Rules for the conversion of infix to prefix expression:

First, reverse the infix expression given in the problem.

Scan the expression from left to right.

Whenever the operands arrive, print them.


If the operator arrives and the stack is found to be empty, then simply push the
operator into the stack.

If the incoming operator has higher precedence than the TOP of the stack, push the
incoming operator into the stack.

If the incoming operator has the same precedence with a TOP of the stack, check
the associativity if it is L to R, then push the incoming operator into the stack.

If the incoming operator has lower precedence than the TOP of the stack, pop, and
print the top of the stack. Test the incoming operator against the top of the stack
again and pop the operator from the stack till it finds the operator of a lower
precedence or same precedence.

If the incoming operator has the same precedence as the top of the stack and the
incoming operator is ^, then pop and print the top of the stack as long as this
condition holds, then push the incoming ^ operator.

When we reach the end of the expression, pop, and print all the operators from the
top of the stack.

If the operator is ')', then push it into the stack.

If the operator is '(', then pop and print all the operators from the stack
until the ')' bracket is found, and remove that ')' from the stack. (In the
reversed expression, ')' plays the role of the opening bracket.)

If the top of the stack is ')', push the operator on the stack.

At the end, reverse the output.

Example for infix to prefix:

Input Expression: (O^P)*W/U/V*T+Q

Reversed Input Expression: Q + T * V / U / W * ) P ^ O (

Reversed I/P Expression Stack Output Expression

Q Empty Q

+ + Q

T + QT

* +* QT

V +* QTV

/ +*/ QTV

U +*/ QTVU

/ +*// QTVU

W +*// QTVUW

* +*//* QTVUW

) +*//*) QTVUW
P +*//*) QTVUWP

^ +*//*)^ QTVUWP

O +*//*)^ QTVUWPO

( +*//*) QTVUWPO^

+*//* QTVUWPO^

EMPTY QTVUWPO^*//*+

Reversed the O/P Expression: +* //*^OPWUVTQ

Infix to Postfix Conversion:

Rules:

Print operands as they arrive.

If the stack is empty or contains a left parenthesis on top, push the incoming
operator onto the stack.

If the incoming symbol is '(', push it on to the stack.

If the incoming symbol is ')', pop the stack and print the operators until the left
parenthesis is found.

If the incoming symbol has higher precedence than the top of the stack, push it on
the stack.
If the incoming symbol has lower precedence than the top of the stack, pop and
print the top of the stack. Then test the incoming operator against the new top of
the stack.

If the incoming operator has the same precedence with the top of the stack then use
the associativity rules. If the associativity is from left to right then pop and print
the top of the stack then push the incoming operator. If the associativity is from
right to left then push the incoming operator.

At the end of the expression, pop and print all the operators of the stack.

Let's understand through an example.

Infix expression: K + L - M*N + (O^P) * W/U/V * T + Q

Input Symbol    Stack      Postfix Expression

K                          K
+               +          K
L               +          KL
-               -          KL+
M               -          KL+M
*               -*         KL+M
N               -*         KL+MN
+               +          KL+MN*-
(               +(         KL+MN*-
O               +(         KL+MN*-O
^               +(^        KL+MN*-O
P               +(^        KL+MN*-OP
)               +          KL+MN*-OP^
*               +*         KL+MN*-OP^
W               +*         KL+MN*-OP^W
/               +/         KL+MN*-OP^W*
U               +/         KL+MN*-OP^W*U
/               +/         KL+MN*-OP^W*U/
V               +/         KL+MN*-OP^W*U/V
*               +*         KL+MN*-OP^W*U/V/
T               +*         KL+MN*-OP^W*U/V/T
+               +          KL+MN*-OP^W*U/V/T*+
Q               +          KL+MN*-OP^W*U/V/T*+Q
end of input    (empty)    KL+MN*-OP^W*U/V/T*+Q+

The final postfix expression of the infix expression (K + L - M*N + (O^P) * W/U/V * T + Q) is KL+MN*-OP^W*U/V/T*+Q+.

2. Backtracking

Backtracking is another application of the Stack. It is a recursive technique for exploring candidate solutions, used in solving optimization and constraint problems: when a partial solution fails, the algorithm returns (backtracks) to the most recent decision point recorded on the stack and tries another option.

3. Delimiter Checking

A common application of the Stack is delimiter checking, i.e., parsing that involves analyzing a source program syntactically. It is also called parenthesis checking. When the compiler translates a source program written in some programming language such as C or C++ into machine language, it parses the program into multiple individual parts such as variable names, keywords, etc., by scanning from left to right. The main problem encountered while translating is unmatched delimiters. The delimiters used include parentheses ( ), curly braces { }, square brackets [ ], and the comment delimiters /* and */. Every opening delimiter must match a closing delimiter, i.e., every opening parenthesis should be followed by a matching closing parenthesis. Delimiters can also be nested: an opening delimiter that occurs later in the source program must be closed before those occurring earlier.
Valid Delimiter           Invalid Delimiter

while ( i > 0 )           while ( i >

/* Data Structure */      /* Data Structure

{ ( a + b ) - c }         { ( a + b ) - c

To perform delimiter checking, the compiler makes use of a stack. As the compiler translates a source program, it reads the characters one at a time; if it finds an opening delimiter, it pushes it onto the stack. When a closing delimiter is found, it pops the opening delimiter from the top of the stack and matches it with the closing delimiter.

On matching, the following cases may arise.

o If the delimiters are of the same type, then the match is considered
successful, and the process continues.
o If the delimiters are not of the same type, then the syntax error is reported.

When the end of the program is reached, and the Stack is empty, then the
processing of the source program stops.

Example: To explain this concept, consider the expression [{a -b) * (c -d)}/f]. When the scanner reaches the first ')', the delimiter popped from the top of the stack is '{', which is not of the same type, so a syntax error is reported.


4. Reverse a Data:

To reverse a given set of data, we need to reorder the data so that the first and last
elements are exchanged, the second and second last element are exchanged, and so
on for all other elements.

Example: Suppose we have the string Welcome; on reversing it, we obtain emocleW.

There are different reversing applications:

o Reversing a string
o Converting Decimal to Binary

Reverse a String

A Stack can be used to reverse the characters of a string. Push the characters of the string onto the Stack one by one, then pop them one by one. Because of the last-in-first-out property of the Stack, the first character of the string ends up at the bottom of the Stack and the last character at the top, so popping all the characters returns the string in reverse order.

5. Processing Function Calls:

Stack plays an important role in programs that call several functions in succession.
Suppose we have a program containing three functions: A, B, and C. function A
invokes function B, which invokes the function C.
When we invoke function A, which contains a call to function B, its processing will not be completed until function B has completed its execution and returned; similarly for functions B and C. So we observe that function A will only be completed after function B is completed, and function B will only be completed after function C is completed. Therefore, function A is the first to be started and the last to be completed. This activity matches the last-in-first-out behavior and can easily be handled using a Stack.

Consider addrA, addrB, addrC be the addresses of the statements to which control
is returned after completing the function A, B, and C, respectively.
The above figure shows that return addresses appear in the Stack in the reverse
order in which the functions were called. After each function is completed, the pop
operation is performed, and execution continues at the address removed from the
Stack. Thus the program that calls several functions in succession can be handled
optimally by the stack data structure. Control returns to each function at a correct
place, which is the reverse order of the calling sequence.

QUEUE:

Queue Datastructure Using Array

A queue data structure can be implemented using one dimensional array. The
queue implemented using array stores only fixed number of data values. The
implementation of queue data structure using array is very simple. Just define a
one dimensional array of specific size and insert or delete the values into that array
by using FIFO (First In First Out) principle with the help of
variables 'front' and 'rear'. Initially both 'front' and 'rear' are set to -1. Whenever we want to insert a new value into the queue, we increment 'rear' by one and insert the value at that position. Whenever we want to delete a value from the queue, we delete the element at the 'front' position and increment 'front' by one.

Queue Operations using Array

Queue data structure using array can be implemented as follows...

Before we implement actual operations, first follow the below steps to create an
empty queue.

 Step 1 - Include all the header files which are used in the program and
define a constant 'SIZE' with specific value.
 Step 2 - Declare all the user defined functions which are used in queue
implementation.
 Step 3 - Create a one dimensional array with above defined SIZE (int
queue[SIZE])
 Step 4 - Define two integer variables 'front' and 'rear' and initialize both
with '-1'. (int front = -1, rear = -1)
 Step 5 - Then implement main method by displaying menu of operations list
and make suitable function calls to perform operation selected by the user on
queue.

enQueue(value) - Inserting value into the queue

In a queue data structure, enQueue() is a function used to insert a new element into
the queue. In a queue, the new element is always inserted at rear position. The
enQueue() function takes one integer value as a parameter and inserts that value
into the queue. We can use the following steps to insert an element into the queue...

 Step 1 - Check whether queue is FULL. (rear == SIZE-1)


 Step 2 - If it is FULL, then display "Queue is FULL!!! Insertion is not
possible!!!" and terminate the function.
 Step 3 - If it is NOT FULL, then increment rear value by one (rear++) and
set queue[rear] = value.

deQueue() - Deleting a value from the Queue

In a queue data structure, deQueue() is a function used to delete an element from


the queue. In a queue, the element is always deleted from front position. The
deQueue() function does not take any value as parameter. We can use the
following steps to delete an element from the queue...

 Step 1 - Check whether queue is EMPTY. (front == rear)


 Step 2 - If it is EMPTY, then display "Queue is EMPTY!!! Deletion is not
possible!!!" and terminate the function.
 Step 3 - If it is NOT EMPTY, then increment the front value by one (front
++). Then display queue[front] as deleted element. Then check whether
both front and rear are equal (front == rear), if it TRUE, then set
both front and rear to '-1' (front = rear = -1).

display() - Displays the elements of a Queue

We can use the following steps to display the elements of a queue...

 Step 1 - Check whether queue is EMPTY. (front == rear)


 Step 2 - If it is EMPTY, then display "Queue is EMPTY!!!" and terminate
the function.
 Step 3 - If it is NOT EMPTY, then define an integer variable 'i' and set
'i = front+1'.
 Step 4 - Display 'queue[i]' value and increment 'i' value by one (i++).
Repeat the same until 'i' value reaches to rear (i <= rear)

Queue implementation using array:

#include <stdio.h>
#include <stdlib.h>
#define N 10

int queue[N], front = -1, rear = -1;

void enQueue(int x)
{
    if (rear == N - 1)
        printf("\nQueue is Full!!! Insertion is not possible!!!");
    else if (front == -1 && rear == -1) {
        front = 0;
        rear = 0;
        queue[rear] = x;
        printf("\nInsertion success!!!");
    }
    else {
        rear++;
        queue[rear] = x;
        printf("\nInsertion success!!!");
    }
}

void deQueue()
{
    if (front == -1 && rear == -1)
        printf("\nQueue is Empty!!! Deletion is not possible!!!");
    else if (front == rear) {
        printf("\nDeleted element: %d", queue[front]);
        front = rear = -1;
    }
    else {
        printf("\nDeleted element: %d", queue[front]);
        front++;
    }
}

void display()
{
    int i;
    if (front == -1 && rear == -1)
        printf("\nQueue is empty");
    else {
        printf("\nQueue elements: ");
        for (i = front; i <= rear; i++)
            printf("%d ", queue[i]);
    }
}

int main()
{
    int x, choice = 0;

    while (choice != 4) {
        printf("\n\n***** MENU *****\n");
        printf("1. Insertion\n2. Deletion\n3. Display\n4. Exit");
        printf("\nEnter your choice: ");
        scanf("%d", &choice);
        switch (choice) {
        case 1:
            printf("Enter the value to be inserted: ");
            scanf("%d", &x);
            enQueue(x);
            break;
        case 2:
            deQueue();
            break;
        case 3:
            display();
            break;
        case 4:
            exit(0);
        default:
            printf("\nWrong selection!!! Try again!!!");
        }
    }
    return 0;
}
Implementation of Queue Data Structure

Queue can be implemented using an Array, Stack or Linked List. The easiest way
of implementing a queue is by using an Array.

Initially the head (FRONT) and the tail (REAR) of the queue point at the first index of the array (with array indices starting from 0). As we add elements to the queue, the tail keeps moving ahead, always pointing to the position where the next element will be inserted, while the head remains at the first index.
When we remove an element from Queue, we can follow two possible approaches
(mentioned [A] and [B] in above diagram). In [A] approach, we remove the
element at head position, and then one by one shift all the other elements in
forward position.

In approach [B] we remove the element from head position and then move head to
the next position.

In approach [A] there is the overhead of shifting every remaining element one position forward each time we remove the first element.

In approach [B] there is no such overhead, but every time the head moves one position ahead after a removal, the usable size of the queue shrinks by one slot.

Application of Queue in Data Structure:

Managing requests on a single shared resource such as CPU scheduling and disk
scheduling.

Handling hardware or real-time systems interrupts.

Handling website traffic.

Routers and switches in networking.

Maintaining the playlist in media players.

A stack can be implemented by means of an Array or a Linked List. A stack can either be of a fixed size or support dynamic resizing. Implementing a stack using an array gives a fixed-size stack; a stack can also be implemented dynamically by making use of pointers (a linked list). The same two choices apply to queues.
