ALGORITHM FULL COURSE

The document is a comprehensive guide on data structures and algorithms, covering fundamental concepts, recursion, linked lists, stacks, queues, binary trees, graphs, and searching and sorting techniques. It emphasizes the relationship between data structures and algorithms, the importance of selecting appropriate data structures for specific operations, and includes practical examples and exercises. The content is structured into chapters, each focusing on different aspects of data structures and their applications in programming.


by

Dr. Himanshu Pandey


Assistant Professor
Department of Computer Science and Engineering

FACULTY OF ENGINEERING & TECHNOLOGY


University of Lucknow
LUCKNOW, UTTAR PRADESH

2019-2020
CONTENTS
CHAPTER 1 BASIC CONCEPTS

1.1. Introduction to Data Structures
1.2. Data structures: Organization of data
1.3. Abstract Data Type (ADT)
1.4. Selecting a data structure to match the operation
1.5. Algorithm
1.6. Practical Algorithm design issues
1.7. Performance of a program
1.8. Classification of Algorithms
1.9. Complexity of Algorithms

CHAPTER 2 RECURSION

2.1. Introduction to Recursion


2.2. Differences between recursion and iteration
2.3. Factorial of a given number
2.4. The Towers of Hanoi
2.5. Fibonacci Sequence Problem
2.6. Program using recursion to calculate the NCR of a given number
2.7. Program to calculate the least common multiple of a given number
2.8. Program to calculate the greatest common divisor
Exercises
Multiple Choice Questions

CHAPTER 3 LINKED LISTS

3.1. Linked List Concepts


3.2. Types of Linked Lists
3.3. Single Linked List
3.3.1. Source Code for the Implementation of Single Linked List
3.4. Using a header node
3.5. Array based linked lists
3.6. Double Linked List
3.6.1. A Complete Source Code for the Implementation of Double Linked List
3.7. Circular Single Linked List
3.7.1. Source Code for Circular Single Linked List
3.8. Circular Double Linked List
3.8.1. Source Code for Circular Double Linked List
3.9. Comparison of Linked List Variations
Exercise
Multiple Choice Questions

CHAPTER 4 STACK AND QUEUE

4.1. Stack
4.1.1. Representation of Stack
4.1.2. Program to demonstrate a stack, using array
4.1.3. Program to demonstrate a stack, using linked list
4.2. Algebraic Expressions
4.3. Converting expressions using Stack
4.3.1. Conversion from infix to postfix
4.3.2. Program to convert an infix to postfix expression
4.3.3. Conversion from infix to prefix
4.3.4. Program to convert an infix to prefix expression
4.3.5. Conversion from postfix to infix
4.3.6. Program to convert postfix to infix expression
4.3.7. Conversion from postfix to prefix
4.3.8. Program to convert postfix to prefix expression
4.3.9. Conversion from prefix to infix
4.3.10. Program to convert prefix to infix expression
4.3.11. Conversion from prefix to postfix
4.3.12. Program to convert prefix to postfix expression
4.4. Evaluation of postfix expression
4.5. Applications of stacks
4.6. Queue
4.6.1. Representation of Queue
4.6.2. Program to demonstrate a Queue using array
4.6.3. Program to demonstrate a Queue using linked list
4.7. Applications of Queue
4.8. Circular Queue
4.8.1. Representation of Circular Queue
4.9. Deque
4.10. Priority Queue
Exercises
Multiple Choice Questions

CHAPTER 5 BINARY TREES

5.1. Trees
5.2. Binary Tree
5.3. Binary Tree Traversal Techniques
5.3.1. Recursive Traversal Algorithms
5.3.2. Building Binary Tree from Traversal Pairs
5.3.3. Binary Tree Creation and Traversal Using Arrays
5.3.4. Binary Tree Creation and Traversal Using Pointers
5.3.5. Non Recursive Traversal Algorithms
5.4. Expression Trees
5.4.1. Converting expressions with expression trees
5.5. Threaded Binary Tree
5.6. Binary Search Tree
5.7. AVL Tree

5.8. Search and Traversal Techniques for m-ary trees


5.8.1. Depth first search
5.8.2. Breadth first search
5.9. Sparse Matrices
Exercises
Multiple Choice Questions

CHAPTER 6 GRAPHS

6.1. Introduction to Graphs


6.2. Representation of Graphs
6.3. Minimum Spanning Tree

6.4. Reachability Matrix
6.5. Traversing a Graph
6.5.1. Breadth first search and traversal
6.5.2. Depth first search and traversal
Exercises
Multiple Choice Questions

CHAPTER 7 SEARCHING AND SORTING

7.1. Linear Search


7.1.1. A non-recursive program for Linear Search
7.1.2. A recursive program for Linear Search
7.2. Binary Search
7.2.1. A non-recursive program for Binary Search
7.2.2. A recursive program for Binary Search
7.3. Bubble Sort
7.3.1. Program for Bubble Sort
7.4. Selection Sort
7.4.1 Non-recursive Program for selection sort
7.4.2. Recursive Program for selection sort
7.5. Quick Sort
7.5.1. Recursive program for Quick Sort
Exercises
Multiple Choice Questions

Chapter
1
Basic Concepts

The term data structure is used to describe the way data is stored, and the term
algorithm is used to describe the way data is processed. Data structures and
algorithms are interrelated: choosing a data structure affects the kind of algorithm
you might use, and choosing an algorithm affects the data structures you use.

An Algorithm is a finite sequence of instructions, each of which has a clear meaning


and can be performed with a finite amount of effort in a finite length of time. No
matter what the input values may be, an algorithm terminates after executing a
finite number of instructions.

1.1. Introduction to Data Structures:

Data structure is a representation of logical relationship existing between individual elements of


data. In other words, a data structure defines a way of organizing all data items that considers
not only the elements stored but also their relationship to each other. The term data structure
is used to describe the way data is stored.

To develop a program of an algorithm we should select an appropriate data structure for that
algorithm. Therefore, data structure is represented as:

Algorithm + Data structure = Program

A data structure is said to be linear if its elements form a sequence or a linear list. Linear
data structures such as arrays, stacks, queues and linked lists organize data in linear order. A
data structure is said to be non-linear if its elements form a hierarchical classification where
data items appear at various levels.

Trees and graphs are widely used non-linear data structures. Tree and graph structures
represent hierarchical relationships between individual data elements. Graphs are nothing but
trees with certain restrictions removed.

Data structures are divided into two types:

Primitive data structures.


Non-primitive data structures.

Primitive Data Structures are the basic data structures that directly operate upon the
machine instructions. They have different representations on different computers. Integers,
floating point numbers, character constants, string constants and pointers come under this
category.

Non-primitive data structures are more complicated data structures and are derived from
primitive data structures. They emphasize grouping same or different data items with a
relationship between each data item. Arrays, lists and files come under this category. Figure
1.1 shows the classification of data structures.
Data Structures
    Primitive Data Structures: Integer, Float, Char, Pointers
    Non-primitive Data Structures: Arrays, Lists, Files
        Lists: Linear Lists (Stacks, Queues) and Non-Linear Lists (Trees, Graphs)

Figure 1.1. Classification of Data Structures

1.2. Data structures: Organization of data

The collection of data you work with in a program has some kind of structure or organization.
No matter how complex your data structures are, they can be broken down into two fundamental
types:
Contiguous
Non-contiguous.

In contiguous structures, items of data are kept together in memory (either in RAM or in a file).
An array is an example of a contiguous structure, since each element in the array is located
next to one or two other elements. In contrast, items in a non-contiguous structure are
scattered in memory, but are linked to each other in some way. A linked list is an example of a
non-contiguous data structure. Here, the nodes of the list are linked together using pointers
stored in each node. Figure 1.2 below illustrates the difference between contiguous and non-
contiguous structures.

(a) Contiguous:  1  2  3          (b) Non-contiguous:  1 -> 2 -> 3

Figure 1.2 Contiguous and Non-contiguous structures compared

Contiguous structures:

Contiguous structures can be broken down further into two kinds: those that contain data
items all of the same size, and those where the sizes may differ. Figure 1.3 shows an example of
each kind. The first kind is called the array. Figure 1.3(a) shows an example of an array of
numbers. In an array, each element is of the same type, and thus has the same size.

The second kind of contiguous structure is called a structure. Figure 1.3(b) shows a simple
structure holding a customer's data. In a structure, the fields may be of different
types and thus may have different sizes. For example, the customer's age can be stored in a
small amount of memory, but his or her name, represented as a string of characters, may require many
bytes and may even be of varying length.

Coupled with the atomic types (that is, the single data-item built-in types such as integer, float
and pointers), arrays and structures provide all the building blocks needed to build more exotic forms of
data structure, including the non-contiguous forms.

struct cust_data
{
    int age;
    char name[20];   /* the name stored as a string of characters */
};

(a) Array of integers                    (b) struct cust_data

Figure 1.3 Examples of contiguous structures.

Non-contiguous structures:

Non-contiguous structures are implemented as a collection of data-items, called nodes, where


each node can point to one or more other nodes in the collection. The simplest kind of non-
contiguous structure is the linked list.

A linked list represents a linear, one-dimensional type of non-contiguous structure, where there
is only the notion of backwards and forwards. A tree, such as shown in figure 1.4(b), is an
example of a two-dimensional non-contiguous structure. Here, there is the notion of up and
down and left and right.

In a tree each node has only one link that leads into the node and links can only go down the
tree. The most general type of non-contiguous structure, called a graph has no such
restrictions. Figure 1.4(c) is an example of a graph.

(a) Linked list:  A -> B -> C
(b) Tree:  root A, children B and C, leaves D, E, F, G
(c) Graph:  nodes connected without the tree restrictions

Figure 1.4. Examples of non-contiguous structures


Hybrid structures:

If the two basic types of structures are mixed, the result is a hybrid form, with one part contiguous
and another part non-contiguous. For example, figure 1.5 shows how to implement a double
linked list using three parallel arrays, possibly stored far apart from each other in memory.

(a) Conceptual structure: a double linked list of the items A, B, C and D

(b) Hybrid implementation using three parallel arrays:

    index   D   P   N
      1     A   3   4
      2     B   4   0
      3     C   0   1
      4     D   1   2

    List Head

Figure 1.5. A double linked list via a hybrid data structure

The array D contains the data for the list, whereas the arrays P and N hold the previous and
next node indices. For instance, D[i] holds the data for node i, and P[i] holds the index of the
node previous to i, which may or may not reside at position i - 1. Likewise, N[i] holds the index
of the next node in the list.
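As a minimal sketch (with hypothetical array contents chosen so that the logical order is A, B,
C, D, and with index 0 meaning "no node"), a forward traversal of such a hybrid list could be
written as:

#include <stdio.h>

/* Hypothetical parallel arrays: D holds the data, P the index of the
   previous node, N the index of the next node; index 0 means "none". */
char D[] = { 0, 'A', 'B', 'C', 'D' };
int  P[] = { 0,  0,   1,   2,   3  };
int  N[] = { 0,  2,   3,   4,   0  };
int  head = 1;               /* index of the first node (its P entry is 0) */

int main(void)
{
    int i;
    for (i = head; i != 0; i = N[i])     /* follow the N links forward */
        printf("%c ", D[i]);             /* prints: A B C D            */
    printf("\n");
    return 0;
}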

1.3. Abstract Data Type (ADT):

The design of a data structure involves more than just its organization. You also need to plan
for the way the data will be accessed and processed, that is, how the data will be interpreted.
Actually, non-contiguous structures, including lists, trees and graphs, can be implemented
either contiguously or non-contiguously; likewise, the structures that are normally treated as
contiguous, arrays and structures, can also be implemented non-contiguously.

The notion of a data structure in the abstract needs to be treated differently from whatever is
used to implement the structure. The abstract notion of a data structure is defined in terms of
the operations we plan to perform on the data.

Considering both the organization of data and the expected operations on the data leads to the
notion of an abstract data type. An abstract data type is a theoretical construct that consists of
data as well as the operations to be performed on the data, while hiding implementation details.

For example, a stack is a typical abstract data type. Items stored in a stack can only be added
and removed in a certain order: the last item added is the first item removed. We call these
operations push and pop. We have not specified how the items are stored
on the stack, or how the items are pushed and popped. We have only specified the valid
operations that can be performed.

For example, before ADTs, if we wanted to read a file, we wrote the code to read the physical
file device; that is, we might have to write the same code over and over again. So we created
what is known today as an ADT: we wrote the code to read a file and placed it in a library for
any programmer to use.

As another example, the code to read from a keyboard is an ADT. It has a data structure, a
character, and a set of operations that can be used to read that data structure.

To be made useful, an abstract data type (such as a stack) has to be implemented, and this is
where the data structure comes into play. For instance, we might choose the simple data structure
of an array to represent the stack, and then define the appropriate indexing operations to
perform pushing and popping.
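For instance, a minimal sketch of such an array-based stack (hypothetical names, a fixed
capacity, and a return flag instead of real error handling) could look like this:

#include <stdio.h>

#define MAXSIZE 100

int stack[MAXSIZE];    /* the array that represents the stack          */
int top = -1;          /* index of the top item; -1 means empty        */

int push(int item)     /* returns 1 on success, 0 if the stack is full */
{
    if (top == MAXSIZE - 1)
        return 0;
    stack[++top] = item;
    return 1;
}

int pop(int *item)     /* returns 1 on success, 0 if the stack is empty */
{
    if (top == -1)
        return 0;
    *item = stack[top--];
    return 1;
}

int main(void)
{
    int x;
    push(10);
    push(20);
    while (pop(&x))
        printf("%d ", x);   /* prints 20 10: the last item pushed is popped first */
    return 0;
}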

1.4. Selecting a data structure to match the operation:

The most important process in designing a program involves choosing which data structure to
use. The choice depends greatly on the type of operations you wish to perform.

Suppose we have an application that uses a sequence of objects, where one of the main
operations is deleting an object from the middle of the sequence. The code for this, using an
array, is as follows:

void delete (int *seq, int *n, int posn)
// delete the item at position posn from an array of *n elements.
{
    if (*n)
    {
        int i = posn;
        (*n)--;
        while (i < *n)
        {
            seq[i] = seq[i+1];
            i++;
        }
    }
    return;
}

This function shifts towards the front all elements that follow the element at position posn. This
shifting involves data movement that, for integer elements, is not very costly. However,
suppose the array stores larger objects, and lots of them. In this case, the overhead for moving
data becomes high. The problem is that, in a contiguous structure such as an array, the logical
ordering (the ordering that we wish to interpret our elements to have) is the same as the
physical ordering (the ordering that the elements actually have in memory).

If we choose a non-contiguous representation, however, we can separate the logical ordering from
the physical ordering and thus change one without affecting the other. For example, if we store
our collection of elements using a double linked list (with previous and next pointers), we can
do the deletion without moving the elements; instead, we just modify the pointers in each
node. The code using a double linked list is as follows:

void delete (node *beg, int posn)
// delete the item at posn from a list of elements.
{
    int i = posn;
    node *q = beg;
    while (i && q)
    {
        i--;
        q = q -> next;
    }

    if (q)
    {   /* not at end of list, so detach q by making the previous and
           next nodes point to each other */
        node *p = q -> prev;
        node *n = q -> next;
        if (p)
            p -> next = n;
        if (n)
            n -> prev = p;
    }
    return;
}

The process of detaching a node from a list is independent of the type of data stored in the
node, and can be accomplished with some pointer manipulation, as illustrated in the figure
below: the previous and next neighbours of the removed node are simply made to point to each other.

Figure 1.6 Detaching a node from a list

Since very little data is moved during this process, the deletion using linked lists will often be
faster than when arrays are used.

It may seem that linked lists are superior to arrays, but is that always true? There are trade-offs.
Our linked lists yield faster deletions, but they take up more space because they require
two extra pointers per element.

1.5. Algorithm

An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input
values may be, an algorithm terminates after executing a finite number of instructions. In
addition every algorithm must satisfy the following criteria:

Input: there are zero or more quantities, which are externally supplied;

Output: at least one quantity is produced;


Definiteness: each instruction must be clear and unambiguous;

Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm will
terminate after a finite number of steps;

Effectiveness: every instruction must be sufficiently basic that it can in principle be carried out
by a person using only pencil and paper. It is not enough that each operation be definite, but it
must also be feasible.

In formal computer science, one distinguishes between an algorithm and a program. A
program does not necessarily satisfy the fourth condition (finiteness). One important example of such a
program for a computer is its operating system, which never terminates (except for system
crashes) but continues in a wait loop until more jobs are entered.

We represent an algorithm using pseudo language that is a combination of the constructs of a


programming language together with informal English statements.

1.6. Practical Algorithm design issues:

Choosing an efficient algorithm or data structure is just one part of the design process. Next,
we will look at some design issues that are broader in scope. There are three basic design goals
that we should strive for in a program:

1. Try to save time (Time complexity).


2. Try to save space (Space complexity).
3. Try to save face.

A program that runs faster is a better program, so saving time is an obvious goal. Likewise, a
program that saves space over a competing program is considered desirable. We want to save
face by preventing the program from locking up or generating reams of garbled data.

1.7. Performance of a program:

The performance of a program is the amount of computer memory and time needed to run a
program. We use two approaches to determine the performance of a program. One is
analytical, and the other experimental. In performance analysis we use analytical methods,
while in performance measurement we conduct experiments.

Time Complexity:

The time needed by an algorithm expressed as a function of the size of a problem is called the
TIME COMPLEXITY of the algorithm. The time complexity of a program is the amount of
computer time it needs to run to completion.

The limiting behavior of the complexity as size increases is called the asymptotic time
complexity. It is the asymptotic complexity of an algorithm, which ultimately determines the
size of problems that can be solved by the algorithm.

Space Complexity:

The space complexity of a program is the amount of memory it needs to run to completion. The
space needed by a program has the following components:
Instruction space: Instruction space is the space needed to store the compiled version of the
program instructions.

Data space: Data space is the space needed to store all constant and variable values. Data
space has two components:

Space needed by constants and simple variables in program.


Space needed by dynamically allocated objects such as arrays and class instances.

Environment stack space: The environment stack is used to save information needed to
resume execution of partially completed functions.

Instruction Space: The amount of instruction space that is needed depends on factors such
as:
The compiler used to compile the program into machine code.
The compiler options in effect at the time of compilation.
The target computer.

1.8. Classification of Algorithms

The common running times of algorithms are described below in terms of n, where n is the
number of data items to be sorted or searched, or the number of nodes in a graph, etc.

1        Most instructions of most programs are executed once or at most only a few times.
         If all the instructions of a program have this property, we say that its running time
         is a constant.

log n    When the running time of a program is logarithmic, the program gets slightly
         slower as n grows. This running time commonly occurs in programs that solve a big
         problem by transforming it into a smaller problem, cutting the size by some
         constant fraction. When n is a million, log n is about twenty. Whenever n doubles,
         log n increases by a constant, but log n does not double until n increases to n2.

n When the running time of a program is linear, it is generally the case that a small
amount of processing is done on each input element. This is the optimal situation
for an algorithm that must process n inputs.

n log n  This running time arises for algorithms that solve a problem by breaking it up into
         smaller sub-problems, solving them independently, and then combining the
         solutions. When n doubles, the running time more than doubles.

n2       When the running time of an algorithm is quadratic, it is practical for use only on
         relatively small problems. Quadratic running times typically arise in algorithms that
         process all pairs of data items (perhaps in a double nested loop). Whenever n
         doubles, the running time increases fourfold.

n3       Similarly, an algorithm that processes triples of data items (perhaps in a triple
         nested loop) has a cubic running time and is practical for use only on small
         problems. Whenever n doubles, the running time increases eightfold.

2n       Few algorithms with exponential running time are likely to be appropriate for
         practical use; such algorithms arise naturally as brute-force solutions to
         problems. Whenever n doubles, the running time squares.
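To make these growth rates concrete, the following sketch (a hypothetical example, not part of
the original notes) counts the number of basic steps performed by a linear loop and by a
quadratic nested loop for the same n:

#include <stdio.h>

int main(void)
{
    int n = 1000;
    long linear = 0, quadratic = 0;
    int i, j;

    for (i = 0; i < n; i++)          /* n: one step per input element   */
        linear++;

    for (i = 0; i < n; i++)          /* n2: one step per pair of items  */
        for (j = 0; j < n; j++)
            quadratic++;

    printf("n = %d: linear = %ld, quadratic = %ld\n", n, linear, quadratic);
    /* doubling n doubles the linear count but quadruples the quadratic count */
    return 0;
}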


1.9. Complexity of Algorithms

The complexity of an algorithm M is the function f(n) which gives the running time and/or
storage space requirement of the algorithm in terms of the size n of the input data. Mostly,
the storage space required by an algorithm is simply a multiple of the data size n, so complexity
here shall refer to the running time of the algorithm.

The function f(n), which gives the running time of an algorithm, depends not only on the size n of
the input data but also on the particular data. The complexity function f(n) for certain cases
is:

Best Case : The minimum possible value of f(n) is called the best case.
Average Case : The expected value of f(n).
Worst Case : The maximum value of f(n) for any possible input.

The field of computer science, which studies efficiency of algorithms, is known as analysis of
algorithms.

Algorithms can be evaluated by a variety of criteria. Most often we shall be interested in the
rate of growth of the time or space required to solve larger and larger instances of a problem.
We will associate with the problem an integer, called the size of the problem, which is a
measure of the quantity of input data.
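As a small illustration (a hypothetical example), consider searching an array with linear
search: the best case value of f(n) is one comparison (the key is at the first position) and the
worst case is n comparisons (the key is at the last position or absent).

#include <stdio.h>

/* returns the number of comparisons made while searching for key */
int linear_search_count(int a[], int n, int key)
{
    int i, count = 0;
    for (i = 0; i < n; i++)
    {
        count++;
        if (a[i] == key)
            break;
    }
    return count;
}

int main(void)
{
    int a[] = { 7, 3, 9, 1, 5 };
    int n = 5;
    printf("best case : %d comparison(s)\n", linear_search_count(a, n, 7));  /* 1 */
    printf("worst case: %d comparison(s)\n", linear_search_count(a, n, 5));  /* n */
    return 0;
}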
Chapter
2
Recursion
Recursion is deceptively simple in statement but exceptionally
complicated in implementation. Recursive procedures work fine for many
problems, and many programmers prefer recursion even though simpler
alternatives are available, because recursion is elegant to use even
though it is costly in terms of time and space. But using it is one thing
and getting involved with it is another.

This chapter is for those who not only love recursion but also want to
understand it. With a bit of involvement it is going to be an interesting
read for you.

2.1. Introduction to Recursion:

A function is recursive if a statement in the body of the function calls itself. Recursion is
the process of defining something in terms of itself. For a computer language to be
recursive, a function must be able to call itself.

For example, let us consider the function factorial() shown below, which computes the
factorial of an integer.

#include <stdio.h>
int factorial (int);

main()
{
    int num, fact;
    printf ("Enter a number: ");
    scanf ("%d", &num);
    fact = factorial (num);
    printf ("\n Factorial of %d =%5d\n", num, fact);
}

int factorial (int n)


{
int result;
if (n == 0)
return (1);
else
result = n * factorial (n-1);

return (result);
}

A non-recursive or iterative version for finding the factorial is as follows:

int factorial (int n)
{
int i, result = 1;
if (n == 0)
return (result);
else
{
for (i=1; i<=n; i++)
result = result * i;
}
return (result);
}

The operation of the non-recursive version is clear as it uses a loop starting at 1 and
ending at the target value and progressively multiplies each number by the moving
product.

When a function calls itself, new local variables and parameters are allocated storage
on the stack and the function code is executed with these new variables from the start.
A recursive call does not make a new copy of the function. Only the arguments and
variables are new. As each recursive call returns, the old local variables and parameters
are removed from the stack and execution resumes at the point of the function call
inside the function.

When writing recursive functions, you must have an exit condition somewhere to force
the function to return without the recursive call being executed. If you do not have an
exit condition, the recursive function will recurse forever until you run out of stack
space, and the program will terminate with an error about lack of memory, or a stack overflow.

2.2. Differences between recursion and iteration:

Both involve repetition.


Both involve a termination test.
Both can occur infinitely.

Iteration                                     Recursion

Iteration explicitly uses a repetition        Recursion achieves repetition through
structure.                                    repeated function calls.

Iteration terminates when the loop-           Recursion terminates when a base case
continuation condition fails.                 is recognized.

Iteration keeps modifying the counter         Recursion keeps producing simpler
until the loop-continuation condition         versions of the original problem until
fails.                                        the base case is reached.

Iteration normally occurs within a loop,      Recursion causes another copy of the
so no extra memory is assigned.               function to be created, and hence
                                              occupies a considerable amount of memory.

Iteration reduces the processor's             Recursion increases the processor's
operating time.                               operating time.

2.3. Factorial of a given number:

The operation of recursive factorial function is as follows:


Start out with some natural number N (in our example, 5). The recursive definition is:

n = 0, 0 ! = 1 Base Case
n > 0, n ! = n * (n - 1) ! Recursive Case
Recursion Factorials:

5! =5 * 4! = 5 *___ = ____ factr(5) = 5 * factr(4) = __


4! = 4 *3! = 4 *___ = ___ factr(4) = 4 * factr(3) = __
3! = 3 * 2! = 3 * ___ = ___ factr(3) = 3 * factr(2) = __
2! = 2 * 1! = 2 * ___ = ___ factr(2) = 2 * factr(1) = __
1! = 1 * 0! = 1 * __ = __ factr(1) = 1 * factr(0) = __
0! = 1 factr(0) = __

5! = 5*4! = 5*4*3! = 5*4*3*2! = 5*4*3*2*1! = 5*4*3*2*1*0! = 5*4*3*2*1*1


=120

We define 0! to equal 1, and we define factorial N (where N > 0) to be N * factorial (N-1).
All recursive functions must have an exit condition, that is, a state in which the function
does not recurse upon itself. Our exit condition in this example is when N = 0.

Tracing of the flow of the factorial () function:

When the factorial function is first called with, say, N = 5, here is what happens:

FUNCTION:
Does N = 0? No
Function Return Value = 5 * factorial (4)

At this time, the function factorial is called again, with N = 4.

FUNCTION:
Does N = 0? No
Function Return Value = 4 * factorial (3)

At this time, the function factorial is called again, with N = 3.

FUNCTION:
Does N = 0? No
Function Return Value = 3 * factorial (2)

At this time, the function factorial is called again, with N = 2.

FUNCTION:
Does N = 0? No
Function Return Value = 2 * factorial (1)

At this time, the function factorial is called again, with N = 1.

FUNCTION:
Does N = 0? No
Function Return Value = 1 * factorial (0)

At this time, the function factorial is called again, with N = 0.

FUNCTION:
Does N = 0? Yes
Function Return Value = 1
Now, we have to trace our way back up! See, the factorial function was called six times.
At any function level call, all function level calls above still exist! So, when we have N =
2, the function instances where N = 3, 4, and 5 are still waiting for their return values.

So, the function call where N = 1 gets retraced first, once the final call (N = 0) returns 1. The
function call where N = 1 then returns 1 * 1, or 1. The next higher function call, where N
= 2, returns 2 * 1 (1, because that's what the function call where N = 1 returned). You
just keep working up the chain.

When N = 2, 2 * 1, or 2 was returned.


When N = 3, 3 * 2, or 6 was returned.
When N = 4, 4 * 6, or 24 was returned.
When N = 5, 5 * 24, or 120 was returned.

And since N = 5 was the first function call (hence the last one to be recalled), the
value 120 is returned.

2.4. The Towers of Hanoi:

In the game of Towers of Hanoi, there are three towers labeled 1, 2 and 3. The game
starts with n disks on tower 1. For simplicity, let n be 3. The disks are numbered from 1
to 3, and without loss of generality we may assume that the diameter of each disk is
the same as its number. That is, disk 1 has diameter 1 (in some unit of measure), disk
2 has diameter 2, and disk 3 has diameter 3. All three disks start on tower 1 in the
order 1, 2, 3. The objective of the game is to move all the disks from tower 1 to
tower 3 using tower 2, subject to the rule that at no time can a larger disk be placed on a smaller disk.

Figure 3.11.1 illustrates the initial setup of the Towers of Hanoi, and figure 3.11.2
illustrates the final setup.

The rules to be followed in moving the disks from tower 1 to tower 3 using tower 2 are as
follows:

Only one disk can be moved at a time.


Only the top disc on any tower can be moved to any other tower.
A larger disk cannot be placed on a smaller disk.

Tower 1        Tower 2        Tower 3

Fig. 3.11.1. Initial setup of Towers of Hanoi (all n disks on tower 1)

Tower 1        Tower 2        Tower 3

Fig. 3.11.2. Final setup of Towers of Hanoi (all n disks on tower 3)

The Towers of Hanoi problem can be easily implemented using recursion. To move the
largest disk to the bottom of tower 3, we move the remaining n - 1 disks to tower 2
and then move the largest disk to tower 3. Now we have the remaining n - 1 disks to
be moved to tower 3. This can be achieved by using the remaining two towers. We can
also place any disk on tower 3, since the disk already placed on tower 3 is the largest
disk; we continue the same operation until all the disks are placed on tower 3 in order.

The program that uses recursion to produce a list of moves that shows how to
accomplish the task of transferring the n disks from tower 1 to tower 3 is as follows:

#include <stdio.h>
#include <conio.h>

void towers_of_hanoi (int n, char *a, char *b, char *c);

int cnt=0;

int main (void)


{
int n;
printf("Enter number of discs: ");
scanf("%d",&n);
towers_of_hanoi (n, "Tower 1", "Tower 2", "Tower 3");
getch();
}

void towers_of_hanoi (int n, char *a, char *b, char *c)


{
if (n == 1)
{
++cnt;
printf ("\n%5d: Move disk 1 from %s to %s", cnt, a, c);
return;
}
else
{
towers_of_hanoi (n-1, a, c, b);
++cnt;
printf ("\n%5d: Move disk %d from %s to %s", cnt, n, a, c);
towers_of_hanoi (n-1, b, a, c); return;

}
}
Output of the program:

RUN 1:

Enter the number of discs: 3

1: Move disk 1 from tower 1 to tower 3.


2: Move disk 2 from tower 1 to tower 2.
3: Move disk 1 from tower 3 to tower 2.
4: Move disk 3 from tower 1 to tower 3.
5: Move disk 1 from tower 2 to tower 1.
6: Move disk 2 from tower 2 to tower 3.
7: Move disk 1 from tower 1 to tower 3.

RUN 2:

Enter the number of discs: 4

1: Move disk 1 from tower 1 to tower 2.


2: Move disk 2 from tower 1 to tower 3.
3: Move disk 1 from tower 2 to tower 3.
4: Move disk 3 from tower 1 to tower 2.
5: Move disk 1 from tower 3 to tower 1.
6: Move disk 2 from tower 3 to tower 2.
7: Move disk 1 from tower 1 to tower 2.
8: Move disk 4 from tower 1 to tower 3.
9: Move disk 1 from tower 2 to tower 3.
10: Move disk 2 from tower 2 to tower 1.
11: Move disk 1 from tower 3 to tower 1.
12: Move disk 3 from tower 2 to tower 3.
13: Move disk 1 from tower 1 to tower 2.
14: Move disk 2 from tower 1 to tower 3.
15: Move disk 1 from tower 2 to tower 3.

2.5. Fibonacci Sequence Problem:

A Fibonacci sequence starts with the integers 0 and 1. Successive elements in this
sequence are obtained by summing the preceding two elements in the sequence. For
example, third number in the sequence is 0 + 1 = 1, fourth number is 1 + 1= 2, fifth
number is 1 + 2 = 3 and so on. The sequence of Fibonacci integers is given below:

0 1 1 2 3 5 8 13 21 . . . . . . . . .
A recursive definition for the Fibonacci sequence of integers may be defined as follows:

Fib (n) = n if n = 0 or n = 1
Fib (n) = fib (n-1) + fib (n-2) for n >=2

We will now use the definition to compute fib(5):

fib(5) = fib(4) + fib(3)

       = fib(3) + fib(2) + fib(3)

       = fib(2) + fib(1) + fib(2) + fib(3)

       = fib(1) + fib(0) + fib(1) + fib(2) + fib(3)

       = 1 + 0 + 1 + fib(1) + fib(0) + fib(3)

       = 1 + 0 + 1 + 1 + 0 + fib(2) + fib(1)

       = 1 + 0 + 1 + 1 + 0 + fib(1) + fib(0) + fib(1)

       = 1 + 0 + 1 + 1 + 0 + 1 + 0 + 1 = 5

We see that fib(2) is computed 3 times, and fib(3) 2 times, in the above calculation. A better
approach would save the values of fib(2) and fib(3) and reuse them whenever needed.

A recursive function to compute the Fibonacci number in the nth position is given below:

#include <stdio.h>
#include <conio.h>

int fib (int n);

main()
{
    clrscr();
    printf ("fib(5) is %d", fib(5));
}

int fib (int n)
{
    int x;
    if (n == 0 || n == 1)
        return n;
    x = fib(n-1) + fib(n-2);
    return (x);
}

Output:

fib(5) is 5
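
As noted above, fib(2) and fib(3) are recomputed several times. A minimal sketch of the
"save and reuse" idea (memoization, with a hypothetical table size) is:

#include <stdio.h>

#define MAXN 50

long memo[MAXN];        /* memo[i] holds fib(i) once computed; 0 means "not yet" */

long fib(int n)
{
    if (n == 0 || n == 1)
        return n;
    if (memo[n] != 0)   /* reuse a previously computed value */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("fib(5) is %ld\n", fib(5));   /* prints 5, computing each fib(i) only once */
    return 0;
}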
2.6. Program using recursion to calculate the NCR of a given number:

#include <stdio.h>

float ncr (int n, int r);

void main()
{
    int n, r;
    float result;
    printf("Enter the value of N and R: ");
    scanf("%d %d", &n, &r);
    result = ncr(n, r);
    printf("The NCR value is: %.2f", result);
}

float ncr (int n, int r)
{
    if (r == 0)
        return 1;
    else
        return (n * 1.0 / r * ncr(n-1, r-1));
}

Output:

Enter the value of N and R: 5 2


The NCR value is: 10.00

2.7. Program to calculate the least common multiple of a given number:

#include <stdio.h>

int allone (int a[], int n);
long int lcm (int a[], int n, int prime);

void main()
{
    int a[20], i, n;
    printf("Enter the limit: ");
    scanf("%d", &n);
    printf("Enter the numbers: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("The least common multiple is %ld", lcm(a, n, 2));
}

/* returns 1 if every element of the array has been reduced to 1 */
int allone (int a[], int n)
{
    int k;
    for (k = 0; k < n; k++)
        if (a[k] != 1)
            return 0;
    return 1;
}

long int lcm (int a[], int n, int prime)
{
    int i, status;
    status = 0;
    if (allone(a, n))
        return 1;
    for (i = 0; i < n; i++)
        if ((a[i] % prime) == 0)
        {
            status = 1;
            a[i] = a[i] / prime;
        }
    if (status == 1)
        return (prime * lcm(a, n, prime));
    else
        return (lcm(a, n, prime = (prime == 2) ? prime+1 : prime+2));
}

Output:

Enter the limit: 6


Enter the numbers: 6 5 4 3 2 1
The least common multiple is 60

2.8. Program to calculate the greatest common divisor:

#include<stdio.h>

int check_limit (int a[], int n, int prime);


int check_all (int a[], int n, int prime);
long int gcd (int a[], int n, int prime);

void main()
{
    int a[20], i, n;
    printf("Enter the limit: ");
    scanf("%d", &n);
    printf("Enter the numbers: ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    printf("The greatest common divisor is %ld", gcd(a, n, 2));
}

int check_limit (int a[], int n, int prime)


{
int i;
for (i = 0; i < n; i++)
if (prime > a[i])
return 1;
return 0;
}
int check_all (int a[], int n, int prime)
{
int i;
for (i = 0; i < n; i++)
if ((a[i] % prime) != 0)
return 0;
for (i = 0; i < n; i++)
a[i] = a[i] / prime;
return 1;
}

long int gcd (int a[], int n, int prime)


{
int i;
if (check_limit(a, n, prime))
return 1;
if (check_all (a, n, prime))
return (prime * gcd (a, n, prime));
else
return (gcd (a, n, prime = (prime == 2) ? prime+1 : prime+2));
}

Output:

Enter the limit: 5


Enter the numbers: 99 55 22 77 121
The greatest common divisor is 11

Exercises

1. What is the importance of the stopping case in recursive functions?

2. Write a function with one positive integer parameter called n. The function will
write 2^n-1 integers (where ^ is the exponentiation operation). Here are the
patterns of output for various values of n:

n=1: Output is: 1


n=2: Output is: 121
n=3: Output is: 1213121
n=4: Output is: 121312141213121

And so on. Note that the output for n always consists of the output for n-1,
followed by n itself, followed by a second copy of the output for n-1.

3. Write a recursive function for the mathematical function:


f(n) = 1 if n = 1
f(n) = 2 * f(n-1) if n >= 2

4. Which method is preferable in general?


a) Recursive method
b) Non-recursive method

5. Write a function using Recursion to print numbers from n to 0.

6. Write a function using Recursion to enter and display a string in reverse and
state whether the string contains any spaces. Don't use arrays/strings.
7. Write a function using Recursion to check if a number n is prime. (You have to
check whether n is divisible by any number below n)

8. Write a function using Recursion to enter characters one by one until a space is
encountered. The function should return the depth at which the space was
encountered.

Multiple Choice Questions

1. In a single function declaration, what is the maximum number of            [    ]
statements that may be recursive calls?
A. 1 B. 2
C. n (where n is the argument) D. There is no fixed maximum

2. What is the maximum depth of recursive calls a function may make?          [    ]


A. 1 B. 2
C. n (where n is the argument) D. There is no fixed maximum

3. Consider the following function:                                           [    ]

void super_write_vertical (int number)
{
    if (number < 0)
    {
        printf("-\n");
        super_write_vertical(abs(number));
    }
    else if (number < 10)
        printf("%d\n", number);
    else
    {
        super_write_vertical(number / 10);
        printf("%d\n", number % 10);
    }
}
What values of number are directly handled by the stopping case?
A. number < 0 B. number < 10
C. number >= 0 && number < 10 D. number > 10

4. Consider the following function: [ ]


void super_write_vertical (int number)
{
    if (number < 0)
    {
        printf("-\n");
        super_write_vertical(abs(number));
    }
    else if (number < 10)
        printf("%d\n", number);
    else
    {
        super_write_vertical(number / 10);
        printf("%d\n", number % 10);
    }
}
Which call will result in the most recursive calls?
A. super_write_vertical(-1023) B. super_write_vertical(0)
C. super_write_vertical(100) D. super_write_vertical(1023)
5. Consider this function declaration: [ ]

void quiz (int i)
{
    if (i > 1)
    {
        quiz(i / 2);
        quiz(i / 2);
    }
    printf("*");
}

How many asterisks are printed by the function call quiz(5)?


A. 3 B. 4
C. 7 D. 8

6. In a real computer, what will happen if you make a recursive call without [ ]
making the problem smaller?
A. The operating system detects the infinite recursion because of the
"repeated state"
B. The program keeps running until you press Ctrl-C
C. The results are non-deterministic
D. The run-time stack overflows, halting the program

7. When the compiler compiles your program, how is a recursive call [ ]


treated differently than a non-recursive function call?
A. Parameters are all treated as reference arguments
B. Parameters are all treated as value arguments
C. There is no duplication of local variables
D. None of the above

8. When a function call is executed, which information is not saved in the [ ]


activation record?
A. Current depth of recursion.
B. Formal parameters.
C. Location where the function should return when done.
D. Local variables

9. What technique is often used to prove the correctness of a recursive [ ]


function?
A. Commutativity. B. Diagonalization.
C. Mathematical induction. D. Matrix Multiplication.
Chapter
3
LINKED LISTS
In this chapter, the list data structure is presented. This structure can be used
as the basis for the implementation of other data structures (stacks, queues
etc.). The basic linked list can be used without modification in many programs.
However, some applications require enhancements to the linked list design.
These enhancements fall into three broad categories and yield variations on
linked lists that can be used in any combination: circular linked lists, double
linked lists and lists with header nodes.

Linked lists and arrays are similar since they both store collections of data. Array is the
most common data structure used to store collections of elements. Arrays are
convenient to declare and provide the easy syntax to access any element by its index
number. Once the array is set up, access to any element is convenient and fast. The
disadvantages of arrays are:

The size of the array is fixed. Most often this size is specified at compile time. This
forces programmers to allocate arrays that seem "large enough", often larger than
required.

Inserting new elements at the front is potentially expensive because existing


elements need to be shifted over to make room.

Deleting an element from an array is not possible.

Linked lists have their own strengths and weaknesses, but they happen to be strong
where arrays are weak. Generally an array allocates the memory for all its elements in
one block, whereas linked lists use an entirely different strategy: they allocate
memory for each element separately and only when necessary.

Here is a quick review of the terminology and rules of pointers. The linked list code
will depend on the following functions:

malloc() is a system function which allocates a block of memory in the "heap" and
returns a pointer to the new block. The prototype of malloc() and other heap functions
are in stdlib.h. malloc() returns NULL if it cannot fulfill the request. It is defined by:

void *malloc (number_of_bytes)

Since a void * is returned the C standard states that this pointer can be converted to
any type. For example,
char *cp;
cp = (char *) malloc (100);

Attempts to get 100 bytes and assigns the starting address to cp. We can also use the
sizeof() function to specify the number of bytes. For example,

int *ip;
ip = (int *) malloc (100*sizeof(int));
free() is the opposite of malloc(): it de-allocates memory. The argument to free()
is a pointer to a block of memory in the heap, a pointer which was obtained by a
malloc() call. The syntax is:

free (ptr);

The advantage of free() is simply memory management when we no longer need a


block.
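
Putting the two calls together, a minimal sketch of allocating a single node on the heap and
releasing it again (using a simple self-referential node type like the one declared in the next
section) is:

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

int main(void)
{
    struct node *p = (struct node *) malloc(sizeof(struct node));
    if (p == NULL)                        /* malloc returns NULL on failure */
    {
        printf("allocation failed\n");
        return 1;
    }
    p -> data = 10;
    p -> next = NULL;
    printf("node holds %d\n", p -> data);
    free(p);                              /* return the block to the heap */
    return 0;
}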

3.1. Linked List Concepts:

A linked list is a non-sequential collection of data items. It is a dynamic data structure.


For every data item in a linked list, there is an associated pointer that would give the
memory location of the next data item in the linked list.

The data items in the linked list are not in consecutive memory locations. They may be
anywhere, but the accessing of these data items is easier as each data item contains
the address of the next data item.

Advantages of linked lists:

Linked lists have many advantages. Some of the very important advantages are:

1. Linked lists are dynamic data structures. i.e., they can grow or shrink during
the execution of a program.
2. Linked lists have efficient memory utilization. Here, memory is not pre-
allocated. Memory is allocated whenever it is required and it is de-allocated
(removed) when it is no longer needed.
3. Insertion and Deletions are easier and efficient. Linked lists provide flexibility
in inserting a data item at a specified position and deletion of the data item
from the given position.
4. Many complex applications can be easily carried out with linked lists.

Disadvantages of linked lists:

1. It consumes more space because every node requires an additional pointer to
store the address of the next node.
2. Searching for a particular element in the list is difficult and also time consuming.

3.2. Types of Linked Lists:

Basically we can classify linked lists into the following four types:

1. Single Linked List.


2. Double Linked List.
3. Circular Linked List.
4. Circular Double Linked List.

A single linked list is one in which all nodes are linked together in some sequential
manner. Hence, it is also called a linear linked list.
A double linked list is one in which all nodes are linked together by multiple links which
helps in accessing both the successor node (next node) and predecessor node (previous
node) from any arbitrary node within the list. Therefore each node in a double linked
list has two link fields (pointers) to point to the left node (previous) and the right node
(next). This helps to traverse in forward direction and backward direction.

A circular linked list is one, which has no beginning and no end. A single linked list can
be made a circular linked list by simply storing address of the very first node in the link
field of the last node.

A circular double linked list is one, which has both the successor pointer and
predecessor pointer in the circular manner.
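
As a sketch of these ideas (the names here are illustrative; the full implementations later in
this chapter use their own declarations), a double linked list node carries two links, and a
list is made circular by linking the last node back to the first:

#include <stdio.h>

/* double linked list node: links to both the previous and the next node */
struct dnode
{
    struct dnode *prev;
    int data;
    struct dnode *next;
};

int main(void)
{
    struct dnode a = { NULL, 1, NULL };
    struct dnode b = { NULL, 2, NULL };

    a.next = &b;  b.prev = &a;      /* double links between the two nodes       */
    b.next = &a;  a.prev = &b;      /* circular: last points back to the first  */

    printf("%d -> %d -> %d (back at the start)\n",
           a.data, a.next -> data, a.next -> next -> data);   /* 1 -> 2 -> 1 */
    return 0;
}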

Comparison between array and linked list:

ARRAY                                         LINKED LIST

Size of an array is fixed.                    Size of a list is not fixed.

Memory is allocated from the stack.           Memory is allocated from the heap.

It is necessary to specify the number of      It is not necessary to specify the number
elements during declaration (i.e., at         of elements during declaration (memory is
compile time).                                allocated during run time).

It occupies less memory than a linked         It occupies more memory.
list for the same number of elements.

Inserting new elements at the front is        Inserting a new element at any position
potentially expensive because existing        can be carried out easily.
elements need to be shifted over to
make room.

Deleting an element from an array is          Deleting an element is possible.
not possible.

Trade offs between linked lists and arrays:

FEATURE                  ARRAYS         LINKED LISTS

Sequential access        efficient      efficient

Random access            efficient      inefficient

Resizing                 inefficient    efficient

Element rearranging      inefficient    efficient

Overhead per element     none           1 or 2 links


Applications of linked list:

1. Linked lists are used to represent and manipulate polynomials (see the sketch
after this list). Polynomials are expressions containing terms with non-zero
coefficients and exponents. For example:

P(x) = a0*x^n + a1*x^(n-1) + . . . + a(n-1)*x + an

2. Represent very large numbers and perform operations on them, such
as addition, multiplication and division.

3. Linked lists are used to implement stacks, queues, trees and graphs.

4. Implement the symbol table in compiler construction.
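
A sketch of the polynomial representation mentioned in item 1 (hypothetical field names; each
node stores one term's coefficient and exponent) is:

#include <stdio.h>
#include <stdlib.h>

/* one term of a polynomial: coefficient, exponent and a link to the next term */
struct term
{
    float coeff;
    int   expo;
    struct term *next;
};

/* prepend a term to the front of the list, so terms supplied in increasing
   order of exponent end up stored in decreasing order of exponent */
struct term *add_term(struct term *head, float coeff, int expo)
{
    struct term *t = (struct term *) malloc(sizeof(struct term));
    t -> coeff = coeff;
    t -> expo  = expo;
    t -> next  = head;
    return t;
}

int main(void)
{
    struct term *p = NULL, *t;

    /* build 3x^2 + 2x + 5 */
    p = add_term(p, 5, 0);
    p = add_term(p, 2, 1);
    p = add_term(p, 3, 2);

    for (t = p; t != NULL; t = t -> next)
        printf("%.0fx^%d ", t -> coeff, t -> expo);   /* prints: 3x^2 2x^1 5x^0 */
    printf("\n");
    return 0;
}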

3.3. Single Linked List:

A linked list allocates space for each element separately in its own block of memory
called a "node". The list gets an overall structure by using pointers to connect all its
nodes together like the links in a chain. Each node contains two fields; a "data" field to
store whatever element, and a "next" field which is a pointer used to link to the next
node. Each node is allocated in the heap using malloc(), so the node memory continues
to exist until it is explicitly de-allocated using free(). The front of the list is a pointer to
the first node.

A single linked list is shown in figure 3.2.1.

start                                  HEAP
 100

 100: [10 | 200] -> 200: [20 | 300] -> 300: [30 | 400] -> 400: [40 | X]

The start pointer holds the address of the first node of the list. Each node stores the
data and the address of the next node. The next field of the last node is NULL.

Figure 3.2.1. Single Linked List

The beginning of the linked list is stored in a "start" pointer which points to the first
node. The first node contains a pointer to the second node. The second node contains a
pointer to the third node, ... and so on. The last node in the list has its next field set to
NULL to mark the end of the list. Code can access any node in the list by starting at the
start and following the next pointers.

The start pointer is an ordinary local pointer variable, so it is drawn separately on the
left top to show that it is in the stack. The list nodes are drawn on the right to show
that they are allocated in the heap.
Implementation of Single Linked List:

Before writing the code to build the above list, we need to create a start node, used to
create and access other nodes in the linked list. The following structure definition will
do (see figure 3.2.2):

Creating a structure with one data item and a next pointer, which will be pointing
to next node of the list. This is called as self-referential structure.

Initialise the start pointer to be NULL.

struct slinklist
{
    int data;
    struct slinklist *next;
};

typedef struct slinklist node;

node *start = NULL;

node:  [ data | next ]          Empty list:  start -> NULL

Figure 3.2.2. Structure definition, single link node and empty list

The basic operations in a single linked list are:

Creation.
Insertion.
Deletion.
Traversing.

Creating a node for Single Linked List:

Creating a singly linked list starts with creating a node. Sufficient memory has to be
allocated for creating a node. The information is stored in the memory allocated by
using the malloc() function. The function getnode() is used for creating a node: after
allocating memory for the structure of type node, the information for the item (i.e.,
data) is read from the user, the next field is set to NULL, and finally the address of
the node is returned. Figure 3.2.3 illustrates the creation of a node for a single linked list.

node* getnode()
{
    node *newnode;
    newnode = (node *) malloc(sizeof(node));
    printf("\n Enter data: ");
    scanf("%d", &newnode -> data);
    newnode -> next = NULL;
    return newnode;
}

newnode ->  100: [10 | X]

Figure 3.2.3. New node with a value of 10


The following steps are to be followed to create 'n' number of nodes:

Get the new node using getnode().


newnode = getnode();

If the list is empty, assign new node as start.


start = newnode;

If the list is not empty, follow the steps given below:

The next field of the new node is made to point the first node (i.e.
start node) in the list by assigning the address of the first node.

The start pointer is made to point the new node by assigning the
address of the new node.

Figure 3.2.4 shows 4 items in a single linked list stored at different locations in
memory.

start
 100

 100: [10 | 200] -> 200: [20 | 300] -> 300: [30 | 400] -> 400: [40 | X]

Figure 3.2.4. Single Linked List with 4 nodes

void createlist(int n)
{
    int i;
    node *newnode;
    node *temp;
    for(i = 0; i < n; i++)
    {
        newnode = getnode();
        if(start == NULL)
        {
            start = newnode;
        }
        else
        {
            temp = start;
            while(temp -> next != NULL)
                temp = temp -> next;
            temp -> next = newnode;
        }
    }
}
Insertion of a Node:

One of the most primitive operations that can be done in a singly linked list is the
insertion of a node. Memory is to be allocated for the new node (in a similar way that is
done while creating a list) before reading the data. The new node will contain empty
data field and empty next field. The data field of the new node is then stored with the
information read from the user. The next field of the new node is assigned to NULL. The
new node can then be inserted at three different places namely:

Inserting a node at the beginning.


Inserting a node at the end.
Inserting a node at intermediate position.

Inserting a node at the beginning:

The following steps are to be followed to insert a new node at the beginning of the list:

Get the new node using getnode().


newnode = getnode();

If the list is empty then start = newnode.

If the list is not empty, follow the steps given below:


newnode -> next = start;
start = newnode;

Figure 3.2.5 shows inserting a node into the single linked list at the beginning.

start
 500

 500: [5 | 100] -> 100: [10 | 200] -> 200: [20 | 300] -> 300: [30 | 400] -> 400: [40 | X]

Figure 3.2.5. Inserting a node at the beginning

The function insert_at_beg(), is used for inserting a node at the beginning

void insert_at_beg()
{
node *newnode;
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
newnode -> next = start;
start = newnode;
}
}
Inserting a node at the end:

The following steps are followed to insert a new node at the end of the list:

Get the new node using getnode()


newnode = getnode();

If the list is empty then start = newnode.

If the list is not empty follow the steps given below:


temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;

Figure 3.2.6 shows inserting a node into the single linked list at the end.

start
 100

 100: [10 | 200] -> 200: [20 | 300] -> 300: [30 | 400] -> 400: [40 | 500] -> 500: [50 | X]

Figure 3.2.6. Inserting a node at the end.

The function insert_at_end(), is used for inserting a node at the end.

void insert_at_end()
{
node *newnode, *temp;
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;
}
}

Inserting a node at intermediate position:

The following steps are followed, to insert a new node in an intermediate position in the
list:

Get the new node using getnode().


newnode = getnode();
Ensure that the specified position is in between first node and last node. If
not, specified position is invalid. This is done by countnode() function.

Store the starting address (which is in start pointer) in temp and prev
pointers. Then traverse the temp pointer upto the specified position followed
by prev pointer.

After reaching the specified position, follow the steps given below:
prev -> next = newnode;
newnode -> next = temp;

Let the intermediate position be 3.

Figure 3.2.7 shows inserting a node into the single linked list at a specified intermediate
position other than beginning and end.

start
 100

 100: [10 | 200] -> 200: [20 | 500] -> 500: [50 | 300] (new node) -> 300: [30 | 400] -> 400: [40 | X]

Figure 3.2.7. Inserting a node at an intermediate position.

The function insert_at_mid(), is used for inserting a node in the intermediate position.

void insert_at_mid()
{
node *newnode, *temp, *prev;
int pos, nodectr, ctr = 1;
newnode = getnode();
printf("\n Enter the position: ");
scanf("%d", &pos);
nodectr = countnode(start);
if(pos > 1 && pos < nodectr)
{
temp = prev = start;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr++;
}
prev -> next = newnode;
newnode -> next = temp;
}
else
{
printf("position %d is not a middle position", pos);
}
}
Deletion of a node:

Another primitive operation that can be done in a singly linked list is the deletion of a
node. Memory is to be released for the node to be deleted. A node can be deleted from
the list from three different places namely.

Deleting a node at the beginning.


Deleting a node at the end.
Deleting a node at intermediate position.

Deleting a node at the beginning:

The following steps are followed, to delete a node at the beginning of the list:

If the list is not empty, follow the steps given below:


temp = start;
start = start -> next;
free(temp);

Figure 3.2.8 shows deleting a node at the beginning of a single linked list.

start
 200

 200: [20 | 300] -> 300: [30 | 400] -> 400: [40 | X]      (temp -> 100: [10 | 200], freed)

Figure 3.2.8. Deleting a node at the beginning.

The function delete_at_beg(), is used for deleting the first node in the list.

void delete_at_beg()
{
node *temp;
if(start == NULL)
{
printf("\n No nodes are exist..");
return ;
}
else
{
temp = start;
start = temp -> next;
free(temp);
printf("\n Node deleted ");
}
}
Deleting a node at the end:

The following steps are followed to delete a node at the end of the list:

If the list is not empty, follow the steps given below:

temp = prev = start;


while(temp -> next != NULL)
{
prev = temp;
temp = temp -> next;
}
prev -> next = NULL;
free(temp);

Figure 3.2.9 shows deleting a node at the end of a single linked list.

start
 100

 100: [10 | 200] -> 200: [20 | 300] -> 300: [30 | X]      (last node 400: [40] freed)

Figure 3.2.9. Deleting a node at the end.

The function delete_at_last(), is used for deleting the last node in the list.

void delete_at_last()
{
node *temp, *prev;
if(start == NULL)
{
printf("\n Empty List..");
return ;
}
else
{
temp = start;
prev = start;
while(temp -> next != NULL)
{
prev = temp;
temp = temp -> next;
}
prev -> next = NULL;
free(temp);
printf("\n Node deleted ");
}
}
Deleting a node at Intermediate position:

The following steps are followed to delete a node from an intermediate position in the
list (the list must contain more than two nodes).

If the list is not empty, follow the steps given below.


if(pos > 1 && pos < nodectr)
{
temp = prev = start;
ctr = 1;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr++;
}
prev -> next = temp -> next;
free(temp);
printf("\n node deleted..");
}

Figure 3.2.10 shows deleting a node at a specified intermediate position other than
beginning and end from a single linked list.

start
 100

 100: [10 | 300] -> 300: [30 | 400] -> 400: [40 | X]      (node 200: [20 | 300] freed)

Figure 3.2.10. Deleting a node at an intermediate position.

The function delete_at_mid(), is used for deleting the intermediate node in the list.

void delete_at_mid()
{
int ctr = 1, pos, nodectr;
node *temp, *prev;
if(start == NULL)
{
printf("\n Empty List..");
return ;
}
else
{
printf("\n Enter position of node to delete: ");
scanf("%d", &pos);
nodectr = countnode(start);
if(pos > nodectr)
{
printf("\nThis node doesnot exist");
}
if(pos > 1 && pos < nodectr)
{
temp = prev = start;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr ++;
}
prev -> next = temp -> next;
free(temp);
printf("\n Node deleted..");
}
else
{
printf("\n Invalid position..");
getch();
}

}
}

Traversal and displaying a list (Left to Right):

To display the information, you have to traverse (move) a linked list, node by node
from the first node, until the end of the list is reached. Traversing a list involves the
following steps:

Assign the address of start pointer to a temp pointer.


Display the information from the data field of each node.

The function traverse() is used for traversing and displaying the information stored in
the list from left to right.

void traverse()
{
    node *temp;
    temp = start;
    printf("\n The contents of List (Left to Right): \n");
    if(start == NULL)
        printf("\n Empty List");
    else
    {
        while (temp != NULL)
        {
            printf("%d ->", temp -> data);
            temp = temp -> next;
        }
    }
    printf("X");
}

Alternatively, the list can be traversed and displayed in reverse order. The function
rev_traverse() is used for traversing and displaying the information stored in the list
from right to left.

void rev_traverse(node *st)
{
if(st == NULL)
{
return;
}
else
{
rev_traverse(st -> next);
printf("%d -->", st -> data);
}
}

Counting the Number of Nodes:

The following code counts the number of nodes in the list using recursion.

int countnode(node *st)
{
if(st == NULL)
return 0;
else
return (1 + countnode(st -> next));
}

3.3.1. Source Code for the Implementation of Single Linked List:

# include <stdio.h>
# include <conio.h>
# include <stdlib.h>

struct slinklist
{
int data;
struct slinklist *next;
};

typedef struct slinklist node;

node *start = NULL;


int menu()
{
int ch;
clrscr();
printf("\n 1.Create a list ");
printf("\n--------------------------");
printf("\n 2.Insert a node at beginning ");
printf("\n 3.Insert a node at end");
printf("\n 4.Insert a node at middle");
printf("\n--------------------------");
printf("\n 5.Delete a node from beginning");
printf("\n 6.Delete a node from Last");
printf("\n 7.Delete a node from Middle");
printf("\n--------------------------");
printf("\n 8.Traverse the list (Left to Right)");
printf("\n 9.Traverse the list (Right to Left)");
printf("\n--------------------------");
printf("\n 10. Count nodes ");
printf("\n 11. Exit ");
printf("\n\n Enter your choice: ");
scanf("%d",&ch);
return ch;
}

node* getnode()
{
node * newnode;
newnode = (node *) malloc(sizeof(node));
printf("\n Enter data: ");
scanf("%d", &newnode -> data);
newnode -> next = NULL;
return newnode;
}

int countnode(node *ptr)


{
int count=0;
while(ptr != NULL)
{
count++;
ptr = ptr -> next;
}
return (count);
}

void createlist(int n)
{
int i;
node *newnode;
node *temp;
for(i = 0; i < n; i++)
{
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;
}
}
}

void traverse()
{
node *temp;
temp = start;
printf("\n The contents of List (Left to Right): \n");
if(start == NULL)
{
printf("\n Empty List");
return;
}
else
{
while(temp != NULL)
{
printf("%d-->", temp -> data);
temp = temp -> next;
}
}
printf(" X ");
}

void rev_traverse(node *start)


{
if(start == NULL)
{
return;
}
else
{
rev_traverse(start -> next);
printf("%d -->", start -> data);
}
}

void insert_at_beg()
{
node *newnode;
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
newnode -> next = start;
start = newnode;
}
}

void insert_at_end()
{
node *newnode, *temp;
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;
}
}

void insert_at_mid()
{
node *newnode, *temp, *prev;
int pos, nodectr, ctr = 1;
newnode = getnode();
printf("\n Enter the position: ");
scanf("%d", &pos);
nodectr = countnode(start);
if(pos > 1 && pos < nodectr)
{
temp = prev = start;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr++;
}
prev -> next = newnode;
newnode -> next = temp;
}
else
printf("position %d is not a middle position", pos);
}

void delete_at_beg()
{
node *temp;
if(start == NULL)
{
printf("\n No nodes are exist..");
return ;
}
else
{
temp = start;
start = temp -> next;
free(temp);
printf("\n Node deleted ");
}
}

void delete_at_last()
{
node *temp, *prev;
if(start == NULL)
{
printf("\n Empty List..");
return ;
}
else
{
temp = start;
prev = start;
while(temp -> next != NULL)
{
prev = temp;
temp = temp -> next;
}
prev -> next = NULL;
free(temp);
printf("\n Node deleted ");
}
}

void delete_at_mid()
{
int ctr = 1, pos, nodectr;
node *temp, *prev;
if(start == NULL)
{
printf("\n Empty List..");
return ;
}
else
{
printf("\n Enter position of node to delete: ");
scanf("%d", &pos);
nodectr = countnode(start);
if(pos > nodectr)
{
printf("\nThis node doesnot exist");

}
if(pos > 1 && pos < nodectr)
{
temp = prev = start;
while(ctr < pos)
{
prev = temp;
temp = temp -> next;
ctr ++;
}
prev -> next = temp -> next;
free(temp);
printf("\n Node deleted..");
}
else
{
printf("\n Invalid position..");
getch();
}
}
}

void main(void)
{
int ch, n;
clrscr();
while(1)
{
ch = menu();
switch(ch)
{
case 1:
if(start == NULL)
{
printf("\n Number of nodes you want to create: ");
scanf("%d", &n);
createlist(n);
printf("\n List created..");
}
else
printf("\n List is already created..");
break;
case 2:
insert_at_beg();
break;
case 3:
insert_at_end();
break;
case 4:
insert_at_mid();
break;
case 5:
delete_at_beg();
break;
case 6:
delete_at_last();
break;
case 7:
delete_at_mid();
break;
case 8:
traverse();
break;
case 9:
printf("\n The contents of List (Right to Left): \n");
rev_traverse(start);
printf(" X ");
break;
case 10:
printf("\n No of nodes : %d ", countnode(start));
break;
case 11 :
exit(0);
}
getch();
}
}

3.4. Using a header node:

A header node is a special dummy node kept at the front of the list. Using a header
node removes the special-case code otherwise needed for operations on the first node of
the list. For example, the picture below shows how the list with data 10, 20 and 30 would
be represented using a linked list without and with a header node:

Single linked list without a header node (diagram omitted)

Single linked list with a header node (diagram omitted)

Note that if your linked lists do include a header node, there is no need for the special
case code given above for the remove operation; node n can never be the first node in
the list, so there is no need to check for that case. Similarly, having a header node can
simplify the code that adds a node before a given node n.
Note that if you do decide to use a header node, you must remember to initialize an
empty list to contain one (dummy) node, and you must remember not to include the
header node in the count of "real" nodes in the list.

It is also useful when information other than that found in each node of the list is
needed. For example, imagine an application in which the number of items in a list is
often calculated. In a standard linked list, the list function to count the number of
nodes has to traverse the entire list every time. However, if the current length is
maintained in a header node, that information can be obtained very quickly.
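As a small illustration, the following is a minimal sketch (not part of the toolkit in this
chapter) of a single linked list that keeps a dummy header node whose data field is
reused to hold the current length. The names hnode, hinit, hinsert_front and hcount are
chosen only for this example.

# include <stdio.h>
# include <stdlib.h>

struct hnode
{
    int data;
    struct hnode *next;
};

struct hnode header = { 0, NULL };      /* dummy header; its data field holds the count */

void hinit()
{
    header.data = 0;                    /* no "real" nodes yet        */
    header.next = NULL;                 /* the list body is empty     */
}

void hinsert_front(int value)
{
    struct hnode *newnode = (struct hnode *) malloc(sizeof(struct hnode));
    newnode -> data = value;
    newnode -> next = header.next;      /* no special case for an empty list */
    header.next = newnode;
    header.data++;                      /* length is maintained in the header */
}

int hcount()
{
    return header.data;                 /* length available without traversing the list */
}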

3.5. Array based linked lists:

Another alternative is to allocate the nodes in blocks. In fact, if you know the maximum
size of a list ahead of time, you can pre-allocate the nodes in a single array. The result
is a hybrid structure: an array based linked list. Figure 3.5.1 shows an example of a null
terminated single linked list where all the nodes are allocated contiguously in an array.

Figure 3.5.1. An array based linked list
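
A minimal sketch of such a structure is given below (illustrative only, not part of the
toolkit in this chapter). Here the next field stores the array index of the following node,
-1 plays the role of the NULL pointer, and head_index gives the index of the first node.

# include <stdio.h>

# define MAXNODES 4

struct anode
{
    int data;
    int next;                   /* index of the next node; -1 marks the end of the list */
};

/* all the nodes are pre-allocated contiguously in a single array */
struct anode pool[MAXNODES] =
{
    { 10,  1 },                 /* index 0: first node, points to index 1  */
    { 20,  2 },                 /* index 1: second node, points to index 2 */
    { 30, -1 },                 /* index 2: last node                      */
    {  0, -1 }                  /* index 3: unused                         */
};

int head_index = 0;             /* index of the first node of the list */

void array_traverse()
{
    int i;
    for(i = head_index; i != -1; i = pool[i].next)
        printf("%d --> ", pool[i].data);
    printf("X\n");
}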

3.6. Double Linked List:

A double linked list is a two-way list in which all nodes will have two links. This helps in
accessing both successor node and predecessor node from the given node position. It
provides bi-directional traversing. Each node contains three fields:

Left link.
Data.
Right link.

The left link points to the predecessor node and the right link points to the successor
node. The data field stores the required data.

Many applications require searching forward and backward through the nodes of a list.
For example, searching for a name in a telephone directory would need forward
and backward scanning through a region of the whole list.

The basic operations in a double linked list are:

Creation.
Insertion.
Deletion.
Traversing.
A double linked list is shown in figure 3.3.1.

In figure 3.3.1, the start pointer holds the address of the first node of the list; in each
node the left link stores the address of the previous node and the right link stores the
address of the next node, and the right link of the last node is NULL.

Figure 3.3.1. Double Linked List

The beginning of the double linked list is stored in a "start" pointer which points to the
set to NULL.

The following code gives the structure definition:

struct dlinklist
{
struct dlinklist *left;
int data;
struct dlinklist *right;
};

typedef struct dlinklist node;

node *start = NULL;

Figure 3.4.1. Structure definition, double link node and empty list

Creating a node for Double Linked List:

Creating a double linked list starts with creating a node. Sufficient memory has to be
allocated for creating a node. The information is stored in the memory allocated by
using the malloc() function. The function getnode() is used for creating a node: after
allocating memory for the structure of type node, the information for the item (i.e.,
data) is read from the user, and both the left field and the right field are set to NULL.

node* getnode()
{
node* newnode;
newnode = (node *) malloc(sizeof(node));
printf("\n Enter data: "); X 10 X
scanf("%d", &newnode -> data);
newnode -> left = NULL;
newnode -> right = NULL;
return newnode;
}
The following steps are to be followed to create 'n' number of nodes:

Get the new node using getnode().

newnode =getnode();

If the list is empty then start = newnode.

If the list is not empty, follow the steps given below:

The left field of the new node is made to point to the previous node.

The previous node's right field must be assigned with the address of the
new node.

void createlist(int n)
{
int i;
node *newnode;
node *temp;
for(i = 0; i < n; i++)
{
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> right)
temp = temp -> right;
temp -> right = newnode;
newnode -> left = temp;
}
}
}

Figure 3.4.3 shows 3 items in a double linked list stored at different locations.

Figure 3.4.3. Double Linked List with 3 nodes


Inserting a node at the beginning:

The following steps are to be followed to insert a new node at the beginning of the list:

Get the new node using getnode().

newnode=getnode();

If the list is empty then start = newnode.

If the list is not empty, follow the steps given below:

newnode -> right = start;


start -> left = newnode;
start = newnode;

The function dbl_insert_beg(), is used for inserting a node at the beginning. Figure
3.4.4 shows inserting a node into the double linked list at the beginning.

Figure 3.4.4. Inserting a node at the beginning

Inserting a node at the end:

The following steps are followed to insert a new node at the end of the list:

Get the new node using getnode()

newnode=getnode();

If the list is empty then start = newnode.

If the list is not empty follow the steps given below:

temp = start;
while(temp -> right != NULL)
temp = temp -> right;
temp -> right = newnode;
newnode -> left = temp;

The function dbl_insert_end(), is used for inserting a node at the end. Figure 3.4.5
shows inserting a node into the double linked list at the end.
Figure 3.4.5. Inserting a node at the end

Inserting a node at an intermediate position:

The following steps are followed, to insert a new node in an intermediate position in the
list:

Get the new node using getnode().

newnode=getnode();

Ensure that the specified position is in between the first node and the last node.
If not, the specified position is invalid. This is done by the countnode() function.

Store the starting address (which is in the start pointer) in the temp and prev
pointers. Then traverse the temp pointer up to the specified position, with the
prev pointer following it.

After reaching the specified position, follow the steps given below:

newnode -> left = temp;


newnode -> right = temp -> right;
temp -> right -> left = newnode;
temp -> right = newnode;

The function dbl_insert_mid(), is used for inserting a node in the intermediate position.
Figure 3.4.6 shows inserting a node into the double linked list at a specified
intermediate position other than beginning and end.

Deleting a node at the beginning:

The following steps are followed, to delete a node at the beginning of the list:

If the list is not empty, follow the steps given below:

temp = start;
start = start -> right;
start -> left = NULL;
free(temp);

The function dbl_delete_beg(), is used for deleting the first node in the list. Figure
3.4.6 shows deleting a node at the beginning of a double linked list.

Figure 3.4.6. Deleting a node at beginning

Deleting a node at the end:

The following steps are followed to delete a node at the end of the list:

If the list is not empty, follow the steps given below:

temp = start;
while(temp -> right != NULL)
{
temp = temp -> right;
}
temp -> left -> right = NULL;
free(temp);

The function dbl_delete_last(), is used for deleting the last node in the list. Figure 3.4.7
shows deleting a node at the end of a double linked list.

Figure 3.4.7. Deleting a node at the end


Deleting a node at Intermediate position:

The following steps are followed, to delete a node from an intermediate position in the
list (List must contain more than two nodes).

If the list is not empty, follow the steps given below:

Get the position of the node to delete.

Ensure that the specified position is in between the first node and the last
node. If not, the specified position is invalid.

Then perform the following steps:


if(pos > 1 && pos < nodectr)
{
temp = start;
i = 1;
while(i < pos)
{
temp = temp -> right;
i++;
}
temp -> right -> left = temp -> left;
temp -> left -> right = temp -> right;
free(temp);
printf("\n node deleted..");
}

The function delete_at_mid(), is used for deleting the intermediate node in the list.
Figure 3.4.8 shows deleting a node at a specified intermediate position other than
beginning and end from a double linked list.

Figure 3.4.8 Deleting a node at an intermediate position

Traversal and displaying a list (Left to Right):

To display the information, you have to traverse the list, node by node from the first
node, until the end of the list is reached. The function traverse_left_right() is used for
traversing and displaying the information stored in the list from left to right.

The following steps are followed, to traverse a list from left to right:

If the list is not empty, follow the steps given below:


temp = start;
while(temp != NULL)
{
print temp -> data;
temp = temp -> right;
}

Traversal and displaying a list (Right to Left):

To display the information from right to left, you have to traverse the list, node by node
from the first node, until the end of the list is reached. The function
traverse_right_left() is used for traversing and displaying the information stored in the
list from right to left. The following steps are followed, to traverse a list from right to
left:

If the list is not empty, follow the steps given below:


temp = start;
while(temp -> right != NULL)
temp = temp -> right;
while(temp != NULL)
{
print temp -> data;
temp = temp -> left;
}

Counting the Number of Nodes:

The following code counts the number of nodes in the list (using recursion).

int countnode(node *start)
{
if(start == NULL)
return 0;
else
return (1 + countnode(start -> right));
}

3.6.1. A Complete Source Code for the Implementation of Double Linked List:

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>

struct dlinklist
{
struct dlinklist *left;
int data;
struct dlinklist *right;
};

typedef struct dlinklist node;


node *start = NULL;
node* getnode()
{
node * newnode;
newnode = (node *) malloc(sizeof(node));
printf("\n Enter data: ");
scanf("%d", &newnode -> data);
newnode -> left = NULL;
newnode -> right = NULL;
return newnode;
}

int countnode(node *start)


{
if(start == NULL)
return 0;
else
return 1 + countnode(start -> right);
}

int menu()
{
int ch;
clrscr();
printf("\n 1.Create");
printf("\n------------------------------");
printf("\n 2. Insert a node at beginning ");
printf("\n 3. Insert a node at end");
printf("\n 4. Insert a node at middle");
printf("\n------------------------------");
printf("\n 5. Delete a node from beginning");
printf("\n 6. Delete a node from Last");
printf("\n 7. Delete a node from Middle");
printf("\n------------------------------");
printf("\n 8. Traverse the list from Left to Right
"); printf("\n 9. Traverse the list from Right to
Left "); printf("\n------------------------------");
printf("\n 10.Count the Number of nodes in the list");
printf("\n 11.Exit ");
printf("\n\n Enter your choice: ");
scanf("%d", &ch);
return ch;
}

void createlist(int n)
{
int i;
node *newnode;
node *temp;
for(i = 0; i < n; i++)
{
newnode = getnode();
if(start == NULL)
start = newnode;
else
{
temp = start;
while(temp -> right)
temp = temp -> right;
temp -> right = newnode;
newnode -> left = temp;
}
}
}
void traverse_left_to_right()
{
node *temp;
temp = start;
printf("\n The contents of List: ");
if(start == NULL )
printf("\n Empty List");
else
{
while(temp != NULL)
{
printf("\t %d ", temp -> data);
temp = temp -> right;
}
}
}
void traverse_right_to_left()
{
node *temp;
temp = start;
printf("\n The contents of List: ");
if(start == NULL)
printf("\n Empty List");
else
{
while(temp -> right != NULL)
temp = temp -> right;
}
while(temp != NULL)
{
printf("\t%d", temp -> data);
temp = temp -> left;
}
}
void dll_insert_beg()
{
node *newnode;
newnode = getnode();
if(start == NULL)
start = newnode;
else
{
newnode -> right = start;
start -> left = newnode;
start = newnode;
}
}

void dll_insert_end()
{
node *newnode, *temp;
newnode = getnode();
if(start == NULL)
start = newnode;
else
{
temp = start;
while(temp -> right != NULL)
temp = temp -> right;
temp -> right = newnode;
newnode -> left = temp;
}
}
void dll_insert_mid()
{
node *newnode,*temp;
int pos, nodectr, ctr = 1;
newnode = getnode();
printf("\n Enter the position: ");
scanf("%d", &pos);
nodectr = countnode(start);
if(pos - nodectr >= 2)
{
printf("\n Position is out of range..");
return;
}
if(pos > 1 && pos < nodectr)
{
temp = start;
while(ctr < pos - 1)
{
temp = temp -> right;
ctr++;
}
newnode -> left = temp;
newnode -> right = temp -> right;
temp -> right -> left = newnode;
temp -> right = newnode;
}
else
printf("position %d of list is not a middle position ", pos);
}

void dll_delete_beg()
{
node *temp;
if(start == NULL)
{
printf("\n Empty list");
getch();
return ;
}
else
{
temp = start;
start = start -> right;
start -> left = NULL;
free(temp);
}
}

void dll_delete_last()
{
node *temp;
if(start == NULL)
{
printf("\n Empty list");
getch();
return ;
}
else
{
temp = start;
while(temp -> right != NULL)
temp = temp -> right;
temp -> left -> right = NULL;
free(temp);
temp = NULL;
}
}

void dll_delete_mid()
{
int i = 0, pos, nodectr;
node *temp;
if(start == NULL)
{
printf("\n Empty List");
getch();
return;
}
else
{
printf("\n Enter the position of the node to delete: ");
scanf("%d", &pos);
nodectr = countnode(start);
if(pos > nodectr)
{
printf("\nthis node does not exist");
getch();
return;
}
if(pos > 1 && pos < nodectr)
{
temp = start;
i = 1;
while(i < pos)
{
temp = temp -> right;
i++;
}
temp -> right -> left = temp -> left;
temp -> left -> right = temp -> right;
free(temp);
printf("\n node deleted..");
}
else
{
printf("\n It is not a middle position..");
getch();
}
}
}

void main(void)
{
int ch, n;
clrscr();
while(1)
{
ch = menu();
switch( ch)
{
case 1 :
printf("\n Enter Number of nodes to create: ");
scanf("%d", &n);
createlist(n);
printf("\n List created..");
break;
case 2 :
dll_insert_beg();
break;
case 3 :
dll_insert_end();
break;
case 4 :
dll_insert_mid();
break;
case 5 :
dll_delete_beg();
break;
case 6 :
dll_delete_last();
break;
case 7 :
dll_delete_mid();
break;
case 8 :
traverse_left_to_right();
break;
case 9 :
traverse_right_to_left();
break;

case 10 :
printf("\n Number of nodes: %d", countnode(start));
break;
case 11:
exit(0);
}
getch();
}
}

3.7. Circular Single Linked List:

It is just a single linked list in which the link field of the last node points back to the
address of the first node. A circular linked list has no beginning and no end. It is
necessary to establish a special pointer, called the start pointer, always pointing to the
first node of the list. Circular linked lists are frequently used instead of ordinary linked
lists because many operations are much easier to implement. In a circular linked list no
NULL pointers are used; hence all pointers contain valid addresses.

A circular single linked list is shown in figure 3.6.1.

Figure 3.6.1. Circular Single Linked List


The basic operations in a circular single linked list are:

Creation.
Insertion.
Deletion.
Traversing.

The following steps are to be followed to create 'n' number of nodes:

Get the new node using getnode().

newnode = getnode();

If the list is empty, assign new node as start.

start = newnode;

If the list is not empty, follow the steps given below:

temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;

newnode -> next = start;

Inserting a node at the beginning:

The following steps are to be followed to insert a new node at the beginning of the
circular list:

Get the new node using getnode().

newnode = getnode();

If the list is empty, assign new node as start.

start = newnode;
newnode -> next = start;

If the list is not empty, follow the steps given below:

last = start;
while(last -> next != start)
last = last -> next;
newnode -> next = start;
start = newnode;
last -> next = start;
The function cll_insert_beg(), is used for inserting a node at the beginning. Figure 3.6.2
shows inserting a node into the circular single linked list at the beginning.

Figure 3.6.2. Inserting a node at the beginning

Inserting a node at the end:

The following steps are followed to insert a new node at the end of the list:

Get the new node using getnode().

newnode = getnode();

If the list is empty, assign new node as start.

start = newnode;
newnode -> next = start;

If the list is not empty follow the steps given below:

temp = start;
while(temp -> next != start)
temp = temp -> next;
temp -> next = newnode;
newnode -> next = start;

The function cll_insert_end(), is used for inserting a node at the end.

Figure 3.6.3 shows inserting a node into the circular single linked list at the end.

Figure 3.6.3 Inserting a node at the end.


Deleting a node at the beginning:

The following steps are followed, to delete a node at the beginning of the list:

If the list is not empty, follow the steps given below:

last = temp = start;


while(last -> next != start)
last = last -> next;
start = start -> next;
last -> next = start;

After deleting the node, if the list is empty then start = NULL.

The function cll_delete_beg(), is used for deleting the first node in the list. Figure 3.6.4
shows deleting a node at the beginning of a circular single linked list.

Figure 3.6.4. Deleting a node at beginning.

Deleting a node at the end:

The following steps are followed to delete a node at the end of the list:

If the list is not empty, follow the steps given below:

temp = start;
prev = start;
while(temp -> next != start)
{
prev = temp;
temp = temp -> next;
}
prev -> next = start;

After deleting the node, if the list is empty then start = NULL.

The function cll_delete_last(), is used for deleting the last node in the list.
Figure 3.6.5 shows deleting a node at the end of a circular single linked list.

Figure 3.6.5. Deleting a node at the end.

Traversing a circular single linked list from left to right:

The following steps are followed, to traverse a list from left to right:

If the list is empty, display 'Empty List' message.

If the list is not empty, follow the steps given below:

temp = start;
do
{
printf("%d ", temp -> data);
temp = temp -> next;
} while(temp != start);

3.7.1. Source Code for Circular Single Linked List:

# include <stdio.h>
# include <conio.h>
# include <stdlib.h>

struct cslinklist
{
int data;
struct cslinklist *next;
};

typedef struct cslinklist node;

node *start = NULL;

int nodectr;

node* getnode()
{
node * newnode;
newnode = (node *) malloc(sizeof(node));
printf("\n Enter data: ");
scanf("%d", &newnode -> data);
newnode -> next = NULL;
return newnode;
}
int menu()
{
int ch;
clrscr();
printf("\n 1. Create a list ");
printf("\n\n--------------------------");
printf("\n 2. Insert a node at beginning ");
printf("\n 3. Insert a node at end");
printf("\n 4. Insert a node at middle");
printf("\n\n--------------------------");
printf("\n 5. Delete a node from beginning");
printf("\n 6. Delete a node from Last");
printf("\n 7. Delete a node from Middle");
printf("\n\n--------------------------");
printf("\n 8. Display the list");
printf("\n 9. Exit");
printf("\n\n--------------------------");
printf("\n Enter your choice: ");
scanf("%d", &ch);
return ch;
}

void createlist(int n)
{
int i;
node *newnode;
node *temp;
nodectr = n;
for(i = 0; i < n ; i++)
{
newnode = getnode();
if(start == NULL)
{
start = newnode;
}
else
{
temp = start;
while(temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;
}
}
newnode ->next = start; /* last node is pointing to starting node */
}

void display()
{
node *temp;
temp = start;
printf("\n The contents of List (Left to Right): ");
if(start == NULL )
printf("\n Empty List");
else
{
do
{
printf("\t %d ", temp -> data);
temp = temp -> next;
} while(temp != start);
printf(" X ");
}
}
void cll_insert_beg()
{
node *newnode, *last;
newnode = getnode();
if(start == NULL)
{
start = newnode;
newnode -> next = start;
}
else
{
last = start;
while(last -> next != start)
last = last -> next;
newnode -> next = start;
start = newnode;
last -> next = start;
}
printf("\n Node inserted at beginning..");
nodectr++;
}

void cll_insert_end()
{
node *newnode, *temp;
newnode = getnode();
if(start == NULL )
{
start = newnode;
newnode -> next = start;
}
else
{
temp = start;
while(temp -> next != start)
temp = temp -> next;
temp -> next = newnode;
newnode -> next = start;
}
printf("\n Node inserted at end..");
nodectr++;
}

void cll_insert_mid()
{
node *newnode, *temp, *prev;
int i, pos ;
newnode = getnode();
printf("\n Enter the position: ");
scanf("%d", &pos);
if(pos > 1 && pos < nodectr)
{
temp = start;
prev = temp;
i = 1;
while(i < pos)
{
prev = temp;
temp = temp -> next;
i++;
}
prev -> next = newnode;
newnode -> next = temp;
nodectr++;
printf("\n Node inserted at middle..");
}
else
{
printf("position %d of list is not a middle position ", pos);
}
}

void cll_delete_beg()
{
node *temp, *last;
if(start == NULL)
{
printf("\n No nodes exist..");
getch();
return ;
}
else
{
last = temp = start;
while(last -> next != start)
last = last -> next;
start = start -> next;
last -> next = start;
free(temp);
nodectr--;
printf("\n Node deleted..");
if(nodectr == 0)
start = NULL;
}
}

void cll_delete_last()
{
node *temp,*prev;
if(start == NULL)
{
printf("\n No nodes exist..");
getch();
return ;
}
else
{
temp = start;
prev = start;
while(temp -> next != start)
{
prev = temp;
temp = temp -> next;
}
prev -> next = start;
free(temp);
nodectr--;
if(nodectr == 0)
start = NULL;
printf("\n Node deleted..");
}
}
void cll_delete_mid()
{
int i = 0, pos;
node *temp, *prev;

if(start == NULL)
{
printf("\n No nodes exist..");
getch();
return ;
}
else
{
printf("\n Which node to delete: ");
scanf("%d", &pos);
if(pos > nodectr)
{
printf("\nThis node does not exist");
getch();
return;
}
if(pos > 1 && pos < nodectr)
{
temp=start;
prev = start;
i = 0;
while(i < pos - 1)
{
prev = temp;
temp = temp -> next ;
i++;
}
prev -> next = temp -> next;
free(temp);
nodectr--;
printf("\n Node Deleted..");
}
else
{
printf("\n It is not a middle position..");
getch();
}
}
}

void main(void)
{
int result;
int ch, n;
clrscr();
while(1)
{
ch = menu();
switch(ch)
{
case 1 :
if(start == NULL)
{
printf("\n Enter Number of nodes to create: ");
scanf("%d", &n);
createlist(n);
printf("\nList created..");
}
else
printf("\n List is already Exist..");
break;
case 2 :
cll_insert_beg();
break;
case 3 :
cll_insert_end();
break;
case 4 :
cll_insert_mid();
break;
case 5 :
cll_delete_beg();
break;
case 6 :
cll_delete_last();
break;
case 7 :
cll_delete_mid();
break;
case 8 :
display();
break;
case 9 :
exit(0);
}
getch();
}
}

3.8. Circular Double Linked List:

A circular double linked list has both successor pointer and predecessor pointer in
circular manner. The objective behind considering circular double linked list is to
simplify the insertion and deletion operations performed on double linked list. In
circular double linked list the right link of the right most node points back to the start
node and left link of the first node points to the last node. A circular double linked list is
shown in figure 3.8.1.

Figure 3.8.1. Circular Double Linked List

The basic operations in a circular double linked list are:

Creation.
Insertion.
Deletion.
Traversing.
The following steps are to be followed to create 'n' number of nodes:

Get the new node using getnode().


newnode = getnode();

If the list is empty, then do the following


start = newnode;
newnode -> left = start;
newnode ->right = start;

If the list is not empty, follow the steps given below:


newnode -> left = start -> left;
newnode -> right = start; start
-> left->right = newnode; start
-> left = newnode;

Inserting a node at the beginning:

The following steps are to be followed to insert a new node at the beginning of the list:

Get the new node using getnode().


newnode=getnode();

If the list is empty, then


start = newnode;
newnode -> left = start;
newnode -> right = start;

If the list is not empty, follow the steps given below:

newnode -> left = start -> left;
newnode -> right = start;
start -> left -> right = newnode;
start -> left = newnode;
start = newnode;

The function cdll_insert_beg(), is used for inserting a node at the beginning. Figure
3.8.2 shows inserting a node into the circular double linked list at the beginning.

Figure 3.8.2. Inserting a node at the beginning


Inserting a node at the end:

The following steps are followed to insert a new node at the end of the list:

Get the new node using getnode()


newnode=getnode();

If the list is empty, then


start = newnode;
newnode -> left = start;
newnode -> right = start;

If the list is not empty follow the steps given below:


newnode -> left = start -> left;
newnode -> right = start; start -
> left -> right = newnode; start
-> left = newnode;

The function cdll_insert_end(), is used for inserting a node at the end. Figure 3.8.3
shows inserting a node into the circular linked list at the end.

Figure 3.8.3. Inserting a node at the end

Inserting a node at an intermediate position:

The following steps are followed, to insert a new node in an intermediate position in the
list:

Get the new node using getnode().


newnode=getnode();

Ensure that the specified position is in between the first node and the last node.
If not, the specified position is invalid. This is done by the countnode() function.

Store the starting address (which is in the start pointer) in temp. Then traverse
the temp pointer up to the specified position.

After reaching the specified position, follow the steps given below:
newnode -> left = temp;
newnode -> right = temp -> right;
temp -> right -> left = newnode;
temp -> right = newnode;
nodectr++;
The function cdll_insert_mid(), is used for inserting a node in the intermediate position.
Figure 3.8.4 shows inserting a node into the circular double linked list at a specified
intermediate position other than beginning and end.

Figure 3.8.4. Inserting a node at an intermediate position

Deleting a node at the beginning:

The following steps are followed, to delete a node at the beginning of the list:

If the list is not empty, follow the steps given below:

temp = start;
start = start -> right;
temp -> left -> right = start;
start -> left = temp -> left;

The function cdll_delete_beg(), is used for deleting the first node in the list. Figure
3.8.5 shows deleting a node at the beginning of a circular double linked list.

Figure 3.8.5. Deleting a node at beginning

Deleting a node at the end:

The following steps are followed to delete a node at the end of the list:

If the list is not empty, follow the steps given below:


temp = start;
while(temp -> right != start)
{
temp = temp -> right;
}
temp -> left -> right = temp -> right;
temp -> right -> left = temp -> left;

The function cdll_delete_last(), is used for deleting the last node in the list. Figure 3.8.6
shows deleting a node at the end of a circular double linked list.

Figure 3.8.6. Deleting a node at the end

Deleting a node at Intermediate position:

The following steps are followed, to delete a node from an intermediate position in the
list (the list must contain more than two nodes).

If the list is empty, display 'No nodes exist' message.

If the list is not empty, follow the steps given below:

Get the position of the node to delete.

Ensure that the specified position is in between the first node and the last
node. If not, the specified position is invalid.

Then perform the following steps:

if(pos > 1 && pos < nodectr)


{

temp = start;
i = 1;
while(i < pos)
{
temp = temp -> right ;
i++;
}
temp -> right -> left = temp -> left;
temp -> left -> right = temp -> right;
free(temp);
printf("\n node deleted..");
nodectr--;
}

The function cdll_delete_mid(), is used for deleting the intermediate node in the list.
Figure 3.8.7 shows deleting a node at a specified intermediate position other than
beginning and end from a circular double linked list.

Figure 3.8.7. Deleting a node at an intermediate position

Traversing a circular double linked list from left to right:

The following steps are followed, to traverse a list from left to right:

If the list is empty, display 'Empty List' message.

If the list is not empty, follow the steps given below:


temp = start;
Print temp -> data;
temp = temp -> right;
while(temp != start)
{
print temp -> data;
temp = temp -> right;
}

The function cdll_display_left _right(), is used for traversing from left to right.

Traversing a circular double linked list from right to left:

The following steps are followed, to traverse a list from right to left:

If the list is not empty, follow the steps given below:


temp = start;
do
{
temp = temp -> left;
print temp -> data;
} while(temp != start);

The function cdll_display_right_left(), is used for traversing from right to left.

3.8.1. Source Code for Circular Double Linked List:

# include <stdio.h>
# include <stdlib.h>
# include <conio.h>
struct cdlinklist
{
struct cdlinklist *left;
int data;
struct cdlinklist *right;
};

typedef struct cdlinklist node;


node *start = NULL;
int nodectr;

node* getnode()
{
node * newnode;
newnode = (node *) malloc(sizeof(node));
printf("\n Enter data: ");
scanf("%d", &newnode -> data);
newnode -> left = NULL;
newnode -> right = NULL;
return newnode;
}

int menu()
{
int ch;
clrscr();
printf("\n 1. Create ");
printf("\n\n--------------------------");
printf("\n 2. Insert a node at Beginning");
printf("\n 3. Insert a node at End");
printf("\n 4. Insert a node at Middle");
printf("\n\n--------------------------");
printf("\n 5. Delete a node from Beginning");
printf("\n 6. Delete a node from End");
printf("\n 7. Delete a node from Middle");
printf("\n\n--------------------------");
printf("\n 8. Display the list from Left to Right");
printf("\n 9. Display the list from Right to Left");
printf("\n 10.Exit");
printf("\n\n Enter your choice: ");
scanf("%d", &ch);
return ch;
}

void cdll_createlist(int n)
{
int i;
node *newnode, *temp;
if(start == NULL)
{
nodectr = n;
for(i = 0; i < n; i++)
{
newnode = getnode();
if(start == NULL)
{
start = newnode;
newnode -> left = start;
newnode ->right = start;
}
else
{
newnode -> left = start -> left;
newnode -> right = start;
start -> left->right = newnode;
start -> left = newnode;
}
}
}
else
printf("\n List already exists..");
}

void cdll_display_left_right()
{
node *temp;
temp = start;
if(start == NULL)
printf("\n Empty List");
else
{
printf("\n The contents of List: ");
printf(" %d ", temp -> data);
temp = temp -> right;
while(temp != start)
{
printf(" %d ", temp -> data);
temp = temp -> right;
}
}
}

void cdll_display_right_left()
{
node *temp;
temp = start;
if(start == NULL)
printf("\n Empty List");
else
{
printf("\n The contents of List: ");
do
{
temp = temp -> left;
printf("\t%d", temp -> data);
} while(temp != start);
}
}

void cdll_insert_beg()
{
node *newnode;
newnode = getnode();
nodectr++;
if(start == NULL)
{
start = newnode;
newnode -> left = start;
newnode -> right = start;
}
else
{
newnode -> left = start -> left;
newnode -> right = start;
start -> left -> right = newnode;
start -> left = newnode;
start = newnode;
}
}

void cdll_insert_end()
{
node *newnode,*temp;
newnode = getnode();
nodectr++;
if(start == NULL)
{
start = newnode;
newnode -> left = start;
newnode -> right = start;
}
else
{
newnode -> left = start -> left;
newnode -> right = start;
start -> left -> right = newnode;
start -> left = newnode;
}
printf("\n Node Inserted at End");
}

void cdll_insert_mid()
{
node *newnode, *temp, *prev;
int pos, ctr = 1;
newnode = getnode();
printf("\n Enter the position: ");
scanf("%d", &pos);
if(pos - nodectr >= 2)
{
printf("\n Position is out of range..");
return;
}
if(pos > 1 && pos <= nodectr)
{
temp = start;
while(ctr < pos - 1)
{
temp = temp -> right;
ctr++;
}
newnode -> left = temp;
newnode -> right = temp -> right;
temp -> right -> left = newnode;
temp -> right = newnode;
nodectr++;
printf("\n Node Inserted at Middle.. ");
}
else
printf("position %d of list is not a middle position", pos);
}

void cdll_delete_beg()
{
node *temp;
if(start == NULL)
{
printf("\n No nodes exist..");
getch();
return ;
}
else
{
nodectr--;
if(nodectr == 0)
{
free(start);
start = NULL;
}
else
{
temp = start;
start = start -> right;
temp -> left -> right = start;
start -> left = temp -> left;
free(temp);
}
printf("\n Node deleted at Beginning..");
}
}

void cdll_delete_last()
{
node *temp;
if(start == NULL)
{
printf("\n No nodes exist..");
getch();
return;
}
else
{
nodectr--;
if(nodectr == 0)
{
free(start);
start = NULL;
}
else
{
temp = start;
while(temp -> right != start)
temp = temp -> right;
temp -> left -> right = temp -> right;
temp -> right -> left = temp -> left;
free(temp);
}
printf("\n Node deleted from end ");
}
}

void cdll_delete_mid()
{
int ctr = 1, pos;
node *temp;
if( start == NULL)
{
printf("\n No nodes exist..");
getch();
return;
}
else
{
printf("\n Which node to delete: ");
scanf("%d", &pos);
if(pos > nodectr)
{
printf("\nThis node does not exist");
getch();
return;
}
if(pos > 1 && pos < nodectr)
{
temp = start;
while(ctr < pos)
{
temp = temp -> right ;
ctr++;
}
temp -> right -> left = temp -> left;
temp -> left -> right = temp -> right;
free(temp);
printf("\n node deleted..");
nodectr--;
}
else
{
printf("\n It is not a middle position..");
getch();
}
}
}

void main(void)
{
int ch,n;
clrscr();
while(1)
{
ch = menu();
switch( ch)
{
case 1 :
printf("\n Enter Number of nodes to create: ");
scanf("%d", &n);
cdll_createlist(n);
printf("\n List created..");
break;
case 2 :
cdll_insert_beg();
break;
case 3 :
cdll_insert_end();
break;
case 4 :
cdll_insert_mid();
break;
case 5 :
cdll_delete_beg();
break;
case 6 :
cdll_delete_last();
break;
case 7 :
cdll_delete_mid();
break;
case 8 :
cdll_display_left_right();
break;
case 9 :
cdll_display_right_left();
break;
case 10:
exit(0);
}
getch();
}
}

3.9. Comparison of Linked List Variations:

The major disadvantage of doubly linked lists (over singly linked lists) is that they
require more space (every node has two pointer fields instead of one). Also, the code
to manipulate doubly linked lists needs to maintain the prev fields as well as the next
fields; the more fields that have to be maintained, the more chance there is for errors.

The major advantage of doubly linked lists is that they make some operations (like the
removal of a given node, or a right-to-left traversal of the list) more efficient.
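
For instance, given only a pointer X to a node that is neither the first nor the last node,
a double linked list can unlink that node in constant time, whereas a single linked list
would first have to walk from start to find the predecessor. A minimal sketch, reusing
the left/right node structure of section 3.6 (the function name delete_given_node is
chosen only for this example):

void delete_given_node(node *X)         /* X is neither the first nor the last node */
{
    X -> left -> right = X -> right;    /* the predecessor now skips over X   */
    X -> right -> left = X -> left;     /* the successor points back past X   */
    free(X);
}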

The major advantage of circular lists (over non-circular lists) is that they eliminate
some extra-case code for some operations (like deleting last node). Also, some
applications lead naturally to circular list representations. For example, a computer
network might best be modeled using a circular list.
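
As a small illustration of this, the sketch below (illustrative only) reuses the circular
single linked list node of section 3.7 and visits the stations of such a network in
round-robin order, going around the ring k times; the name round_robin is chosen only
for this example.

void round_robin(node *ring, int k)     /* ring points to any node of the circular list */
{
    node *temp = ring;
    int rounds = 0;
    if(ring == NULL)
        return;
    while(rounds < k)
    {
        printf("%d ", temp -> data);    /* service this station        */
        temp = temp -> next;
        if(temp == ring)                /* one full cycle completed    */
            rounds++;
    }
}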

Exercise

1. Split a linked list into two lists in the following way. Let the list be
L = (l0, l1, ..., ln). The resultant lists would be R1 = (l0, l2, l4, ...) and
R2 = (l1, l3, l5, ...).

2.

3. linked list

4. Suppose that an ordered list L = (l0, l1, ..., ln) is represented by a single linked
list. It is required to append the list L = (ln, l0, l1, ..., ln) after another ordered
list M represented by a single linked list.
5. Implement the following function as a new function for the linked list
toolkit.

Precondition: head_ptr points to the start of a linked list. The list might
be empty or it might be non-empty.

Postcondition: The return value is the number of occurrences of 42 in


the data field of a node on the linked list. The list itself is unchanged.

6. Implement the following function as a new function for the linked list
toolkit.

Precondition: head_ptr points to the start of a linked list. The list might
be empty or it might be non-empty.

Postcondition: The return value is true if the list has at least one
occurrence of the number 42 in the data part of a node.

7. Implement the following function as a new function for the linked list
toolkit.

Precondition: head_ptr points to the start of a linked list. The list might
be empty or it might be non-empty.

Postcondition: The return value is the sum of all the data components of
all the nodes. NOTE: If the list is empty, the function returns 0.

8.
another circular linked list.

9.
columns using linked list.

10.
properly formatted, with zero being printed in place of zero elements.

11. Write programs to:

1. Add two m X n sparse matrices and

2. Multiply two m X n sparse matrices.

Where all sparse matrices are to be represented by linked lists.

13.
to delete the ith node from the list.
Multiple Choice Questions

1. Which among the following is a linear data structure: [ ]


A. Queue C. Linked List
B. Stack D. all the above

2. Which among the following is a dynamic data structure: [ ]


A. Double Linked List C. Stack
B. Queue D. all the above

3. The link field in a node contains: [ ]


A. address of the next node C. data of next node
B. data of previous node D. data of current node

4. Memory is allocated dynamically to a data structure during execution [ ]


by ------- function.
A. malloc() C. realloc()
B. Calloc() D. all the above

5. How many null pointer/s exist in a circular double linked list? [ ]


A. 1 C. 3
B. 2 D. 0
[ ]
6. Suppose that p is a pointer variable that contains the NULL pointer.
What happens if your program tries to read or write *p?
A. A syntax error always occurs at compilation time.
B. A run-time error always occurs when *p is evaluated.
C. A run-time error always occurs when the program finishes.
D. The results are unpredictable.
[ ]
7. What kind of list is best to answer questions such as: "What is the
item at position n?"
A. Lists implemented with an array.
B. Doubly-linked lists.
C. Singly-linked lists.
D. Doubly-linked or singly-linked lists are equally best.

8. In a single linked list which operation depends on the length of the list. [ ]
A. Delete the last element of the list
B. Add an element before the first element of the list
C. Delete the first element of the list
D. Interchange the first two elements of the list

9. A double linked list is declared as follows: [ ]


struct dllist
{
struct dllist *fwd, *bwd;
int data;
}
Where fwd and bwd represents forward and backward links to adjacent
elements of the list. Which among the following segments of code
deletes the element pointed to by X from the double linked list, if it is
assumed that X points to neither the first nor last element of the list?
A. X -> bwd -> fwd = X -> fwd;
X -> fwd -> bwd = X -> bwd
B. X -> bwd -> fwd = X -> bwd;
X -> fwd -> bwd = X -> fwd
C. X -> bwd -> bwd = X -> fwd;
X -> fwd -> fwd = X -> bwd
D. X -> bwd -> bwd = X -> bwd;
X -> fwd -> fwd = X -> fwd

10. Which among the following segment of code deletes the element [ ]
pointed to by X from the double linked list, if it is assumed that X
points to the first element of the list and start pointer points to
beginning of the list?
A. X -> bwd = X -> fwd;
X -> fwd = X -> bwd
B. start = X -> fwd;
start -> bwd = NULL;
C. start = X -> fwd;
X -> fwd = NULL
D. X -> bwd -> bwd = X -> bwd;
X -> fwd -> fwd = X -> fwd

11. Which among the following segment of code deletes the element [ ]
pointed to by X from the double linked list, if it is assumed that X
points to the last element of the list?
A. X -> fwd -> bwd = NULL;
B. X -> bwd -> fwd = X -> bwd;
C. X -> bwd -> fwd = NULL;
D. X -> fwd -> bwd = X -> bwd;

12. Which among the following segment of code counts the number of [ ]
elements in the double linked list, if it is assumed that X points to the
first element of the list and ctr is the variable which counts the number
of elements in the list?
A. for (ctr=1; X != NULL; ctr++)
X = X -> fwd;
B. for (ctr=1; X != NULL; ctr++)
X = X -> bwd;
C. for (ctr=1; X -> fwd != NULL; ctr++)
X = X -> fwd;
D. for (ctr=1; X -> bwd != NULL; ctr++)
X = X -> bwd;

13. Which among the following segment of code counts the number of [ ]
elements in the double linked list, if it is assumed that X points to the
last element of the list and ctr is the variable which counts the number
of elements in the list?
A. for (ctr=1; X != NULL; ctr++)
X = X -> fwd;
B. for (ctr=1; X != NULL; ctr++)
X = X -> bwd;
C. for (ctr=1; X -> fwd != NULL; ctr++)
X = X -> fwd;
D. for (ctr=1; X -> bwd != NULL; ctr++)
X = X -> bwd;
14. Which among the following segment of code inserts a new node [ ]
pointed by X to be inserted at the beginning of the double linked list.
The start pointer points to beginning of the list?

A. X -> bwd = X -> fwd;


X -> fwd = X -> bwd;
B. X -> fwd = start;
start -> bwd = X;
start = X;
C. X -> bwd = X -> fwd;
X -> fwd = X -> bwd;
start = X;
D. X -> bwd -> bwd = X -> bwd;
X -> fwd -> fwd = X -> fwd

15. Which among the following segments of code inserts a new node pointed by [ ]
X to be inserted at the end of the double linked list. The start and last
pointer points to beginning and end of the list respectively?

A. X -> bwd = X -> fwd;


X -> fwd = X -> bwd
B. X -> fwd = start;
start -> bwd = X;
C. last -> fwd = X;
X -> bwd = last;
D. X -> bwd = X -> bwd;
X -> fwd = last;

16. Which among the following segments of code inserts a new node pointed by X [ ]
at any position (i.e. neither first nor last) of the double linked list?
Assume temp pointer points to the previous position of the new node.

A. X -> bwd -> fwd = X -> fwd;


X -> fwd -> bwd = X -> bwd
B. X -> bwd -> fwd = X -> bwd;
X -> fwd -> bwd = X -> fwd
C. temp -> fwd = X;
temp -> bwd = X -> fwd;
X ->fwd = x
X ->fwd->bwd = temp
D. X -> bwd = temp;
X -> fwd = temp -> fwd;
temp ->fwd = X;
X -> fwd -> bwd = X;
17. A single linked list is declared as follows: [ ]
struct sllist
{
struct sllist *next;
int data;
}
Where next represents links to adjacent elements of the list.

Which among the following segments of code deletes the element


pointed to by X from the single linked list, if it is assumed that X
points to neither the first nor last element of the list? prev pointer
points to previous element.

A. prev -> next = X -> next;


free(X);
B. X -> next = prev-> next;
free(X);
C. prev -> next = X -> next;
free(prev);
D. X -> next = prev -> next;
free(prev);

18. Which among the following segment of code deletes the element [ ]
pointed to by X from the single linked list, if it is assumed that X
points to the first element of the list and start pointer points to
beginning of the list?

A. X = start -> next;


free(X);
B. start = X -> next;
free(X);
C. start = start -> next;
free(start);
D. X = X -> next;
start = X;
free(start);

19. Which among the following segment of code deletes the element [ ]
pointed to by X from the single linked list, if it is assumed that X points
to the last element of the list and prev pointer points to last but one
element?

A. prev -> next = NULL;


free(prev);
B. X -> next = NULL;
free(X);
C. prev -> next = NULL;
free(X);
D X -> next = prev;
free(prev);
20. Which among the following segment of code counts the number of [ ]
elements in the single linked list, if it is assumed that X points to the
first element of the list and ctr is the variable which counts the number
of elements in the list?

A. for (ctr=1; X != NULL; ctr++)


X = X -> next;
B. for (ctr=1; X != NULL; ctr--)
X = X -> next;
C. for (ctr=1; X -> next != NULL; ctr++)
X = X -> next;
D. for (ctr=1; X -> next != NULL; ctr--)
X = X -> next;

21. Which among the following segment of code inserts a new node [ ]
pointed by X to be inserted at the beginning of the single linked list.
The start pointer points to beginning of the list?

A. start -> next = X;


X = start;
B. X -> next = start;
start = X
C. X -> next = start -> next;
start = X
D. X -> next = start;
start = X -> next

22. Which among the following segments of code inserts a new node pointed by [ ]
X to be inserted at the end of the single linked list. The start and last
pointer points to beginning and end of the list respectively?

A. last -> next = X;


X -> next = start;
B. X -> next = last;
last ->next = NULL;
C. last -> next = X;
X -> next = NULL;
D. last -> next = X -> next;
X -> next = NULL;

23. Which among the following segments of code inserts a new node pointed by [ ]
X to be inserted at any position (i.e neither first nor last) element of
the single linked list? Assume prev pointer points to the previous
position of new node.

A. X -> next = prev -> next;


prev -> next = X -> next;
B. X = prev -> next;
prev -> next = X -> next;
C. X -> next = prev;
prev -> next = X;
D. X -> next = prev -> next;
prev -> next = X;
24. A circular double linked list is declared as follows: [ ]
struct cdllist
{
struct cdllist *fwd, *bwd;
int data;
}
Where fwd and bwd represents forward and backward links to adjacent
elements of the list.

Which among the following segments of code deletes the element


pointed to by X from the circular double linked list, if it is assumed
that X points to neither the first nor last element of the list?

A. X -> bwd -> fwd = X -> fwd;


X -> fwd -> bwd = X -> bwd;
B. X -> bwd -> fwd = X -> bwd;
X -> fwd -> bwd = X -> fwd;
C. X -> bwd -> bwd = X -> fwd;
X -> fwd -> fwd = X -> bwd;
D. X -> bwd -> bwd = X -> bwd;
X -> fwd -> fwd = X -> fwd;

25. Which among the following segment of code deletes the element [ ]
pointed to by X from the circular double linked list, if it is assumed
that X points to the first element of the list and start pointer points to
beginning of the list?

A. start = start -> bwd;


X -> bwd -> bwd = start;
start -> bwd = X -> bwd;
B. start = start -> fwd;
X -> fwd -> fwd = start;
start -> bwd = X -> fwd
C. start = start -> bwd;
X -> bwd -> fwd = X;
start -> bwd = X -> bwd
D. start = start -> fwd;
X -> bwd -> fwd = start;
start -> bwd = X -> bwd;

26. Which among the following segment of code deletes the element [ ]
pointed to by X from the circular double linked list, if it is assumed
that X points to the last element of the list and start pointer points to
beginning of the list?

A. X -> bwd -> fwd = X -> fwd;


X -> fwd -> fwd= X -> bwd;
B. X -> bwd -> fwd = X -> fwd;
X -> fwd -> bwd = X -> bwd;
C. X -> fwd -> fwd = X -> bwd;
X -> fwd -> bwd= X -> fwd;
D. X -> bwd -> bwd = X -> fwd;
X -> bwd -> bwd = X -> bwd;
27. Which among the following segment of code counts the number of [ ]
elements in the circular double linked list, if it is assumed that X and
start points to the first element of the list and ctr is the variable which
counts the number of elements in the list?
A. for (ctr=1; X->fwd != start; ctr++)
X = X -> fwd;
B. for (ctr=1; X != NULL; ctr++)
X = X -> bwd;
C. for (ctr=1; X -> fwd != NULL; ctr++)
X = X -> fwd;
D. for (ctr=1; X -> bwd != NULL; ctr++)
X = X -> bwd;

28. Which among the following segment of code inserts a new node [ ]
pointed by X to be inserted at the beginning of the circular double
linked list. The start pointer points to beginning of the list?
A. X -> bwd = start; C. X -> fwd = start -> bwd;
X -> fwd = start -> fwd; X -> bwd = start;
start -> bwd-> fwd = X; start -> bwd-> fwd = X;
start -> bwd = X; start -> bwd = X;
start = X start = X

B. X -> bwd = start -> D. X -> bwd = start ->


bwd; X -> fwd = start; bwd; X -> fwd = start;
start -> bwd-> fwd = start -> fwd-> fwd = X;
X; start -> bwd = X; start -> fwd = X;
start = X X = start;

29. Which among the following segment of code inserts a new node [ ]
pointed by X to be inserted at the end of the circular double linked list.
The start pointer points to beginning of the list?
A. X -> bwd = start; C. X -> bwd= start -> bwd;
X -> fwd = start -> fwd; X-> fwd = start;
start -> bwd -> fwd = X; start -> bwd -> fwd = X;
start -> bwd = X; start -> bwd = X;
start = X
D. X -> bwd = start -> bwd;
B. X -> bwd = start -> bwd; X -> fwd = start;
X -> fwd = start; start -> fwd-> fwd = X;
start -> bwd -> fwd = X; start -> fwd = X;
start -> bwd = X; X = start;
start = X

30. Which among the following segments of code inserts a new node pointed by [ ]
X to be inserted at any position (i.e neither first nor last) element of
the circular double linked list? Assume temp pointer points to the
previous position of new node.
A. X -> bwd -> fwd = X -> fwd; C. temp -> fwd = X;
X -> fwd -> bwd = X -> bwd; temp -> bwd = X -> fwd;
X -> fwd = X;
B. X -> bwd -> fwd = X -> bwd; X -> fwd -> bwd = temp;
X -> fwd -> bwd = X -> fwd;
D. X -> bwd = temp;
X -> fwd = temp -> fwd;
temp -> fwd = X;
X -> fwd -> bwd = X;
Chapter 4

Stack and Queue
There are certain situations in computer science in which one wants to
restrict insertions and deletions so that they can take place only at the
beginning or the end of the list, not in the middle. Two such data
structures that are useful are:

Stack.
Queue.

Linear lists and arrays allow one to insert and delete elements at any
place in the list, i.e., at the beginning, at the end or in the middle.

4.1. STACK:

A stack is a list of elements in which an element may be inserted or deleted only at one
end, called the top of the stack. Stacks are sometimes known as LIFO (last in, first out)
lists.

As items can be added or removed only from the top, the last item to be added
to a stack is the first item to be removed.

The two basic operations associated with stacks are:

Push: is the term used to insert an element into a stack.


Pop: is the term used to delete an element from a stack.


All insertions and deletions take place at the same end, so the last element added to
the stack will be the first element removed from the stack. When a stack is created, the
stack base remains fixed while the stack top changes as elements are added and
removed. The most accessible element is the top and the least accessible element is
the bottom of the stack.

4.1.1. Representation of Stack:

Let us consider a stack with a capacity of 6 elements. This is called the size of the stack.
The number of elements to be added should not exceed the maximum size of the stack.
If we attempt to add a new element beyond the maximum size, we will encounter a stack
overflow condition. Similarly, you cannot remove elements beyond the base of the
stack. If such is the case, we will reach a stack underflow condition.

When an element is added to a stack, the operation is performed by push(). Figure 4.1
shows the creation of a stack and addition of elements using push().
Figure 4.1. Push operations on stack

When an element is taken off from the stack, the operation is performed by pop().
Figure 4.2 shows a stack initially with three elements and shows the deletion of
elements using pop().

Figure 4.2. Pop operations on stack

4.1.2. Source code for stack operations, using array:

# include <stdio.h>
# include <conio.h>
# include <stdlib.h>
# define MAX 6
int stack[MAX];
int top = 0;
int menu()
{
int ch;
clrscr();
printf("\
printf("\n -----------**********-------------\n");
printf("\n 1. Push ");
printf("\n 2. Pop ");
printf("\n 3. Display");
printf("\n 4. Quit ");
printf("\n Enter your choice: ");
scanf("%d", &ch);
return ch;
}
void display()
{
int i;
if(top == 0)
{
printf("\n\nStack empty..");
return;
}
else
{
printf("\n\nElements in stack:");
for(i = 0; i < top; i++)
printf("\t%d", stack[i]);
}
}

void pop()
{
if(top == 0)
{
printf("\n\nStack Underflow..");
return;
}
else
printf("\n\npopped element is: %d ", stack[--top]);
}

void push()
{
int data;
if(top == MAX)
{
printf("\n\nStack Overflow..");
return;
}
else
{
printf("\n\nEnter data: ");
scanf("%d", &data);
stack[top] = data;
top = top + 1;
printf("\n\nData Pushed into the stack");
}
}

void main()
{
int ch;
do
{
ch = menu();
switch(ch)
{
case 1:
push();
break;
case 2:
pop();
break;
case 3:
display();
break;

case 4:
exit(0);
}
getch();
} while(1);
}
4.1.3. Linked List Implementation of Stack:

We can represent a stack as a linked list. In a stack push and pop operations are
performed at one end called top. We can perform similar operations at one end of list
using top pointer. The linked stack looks as shown in figure 4.3.

Figure 4.3. Linked stack representation (nodes with data 10, 20, 30 and 40; start
points to the first node and top points to the last node, 40)

4.1.4. Source code for stack operations, using linked list:

# include <stdio.h>
# include <conio.h>
# include <stdlib.h>

struct stack
{
int data;
struct stack *next;
};

void push();
void pop();
void display();
typedef struct stack node;
node *start=NULL;
node *top = NULL;

node* getnode()
{
node *temp;
temp=(node *) malloc( sizeof(node)) ;
printf("\n Enter data ");
scanf("%d", &temp -> data);
temp -> next = NULL;
return temp;
}
void push(node *newnode)
{
node *temp;
if( newnode == NULL )
{
printf("\n Stack Overflow..");
return;
}
if(start == NULL)
{
start = newnode;
top = newnode;
}
else
{
temp = start;
while( temp -> next != NULL)
temp = temp -> next;
temp -> next = newnode;
top = newnode;
}
printf("\n\n\t Data pushed into stack");
}
void pop()
{
node *temp;
if(top == NULL)
{
printf("\n\n\t Stack underflow");
return;
}
temp = start;
if( start -> next == NULL)
{
printf("\n\n\t Popped element is %d ", top -> data);
start = NULL;
free(top);
top = NULL;
}
else
{
while(temp -> next != top)
{
temp = temp -> next;
}
temp -> next = NULL;
printf("\n\n\t Popped element is %d ", top -> data);
free(top);
top = temp;
}
}
void display()
{
node *temp;
if(top == NULL)
{
printf("\n\n\t\t Stack is empty ");
}
else
{
temp = start;
printf("\n\n\n\t\t Elements in the stack: \n");
printf("%5d ", temp -> data);
while(temp != top)
{
temp = temp -> next;
printf("%5d ", temp -> data);
}
}
}
char menu()
{
char ch;
clrscr();
printf("\n \tStack operations using pointers.. ");
printf("\n -----------**********-------------\n");
printf("\n 1. Push ");
printf("\n 2. Pop ");
printf("\n 3. Display");
printf("\n 4. Quit ");
printf("\n Enter your choice: ");
ch = getche();
return ch;
}

void main()
{
char ch;
node *newnode;
do
{
ch = menu();
switch(ch)
{
case '1' :
newnode = getnode();
push(newnode);
break;
case '2' :
pop();
break;
case '3' :
display();
break;
case '4':
return;
}
getch();
} while( ch != '4' );
}

4.2. Algebraic Expressions:

An algebraic expression is a legal combination of operators and operands. An operand is
the quantity on which a mathematical operation is performed. An operand may be a
variable like x, y, z or a constant like 5, 4, 6, etc. An operator is a symbol which
signifies a mathematical or logical operation between the operands. Examples of
familiar operators include +, -, *, /, ^ etc.

An algebraic expression can be represented using three different notations. They are
infix, postfix and prefix notations:

Infix: It is the form of an arithmetic expression in which we fix (place) the


arithmetic operator in between the two operands.

Example: (A + B) * (C - D)

Prefix: It is the form of an arithmetic notation in which we fix (place) the arithmetic
operator before (pre) its two operands. The prefix notation is called polish
notation (due to the Polish mathematician Jan Lukasiewicz in the year 1920).

Example: * + A B - C D

Postfix: It is the form of an arithmetic expression in which we fix (place) the
arithmetic operator after (post) its two operands. The postfix notation is also
called suffix notation and is also referred to as reverse polish notation.

Example: A B + C D - *

The three important features of postfix expression are:

1. The operands maintain the same order as in the equivalent infix expression.

2. The parentheses are not needed to designate the expression unambiguously.

3. While evaluating the postfix expression the priority of the operators is no
   longer relevant.

We consider five binary operations: +, -, *, / and exponentiation (written $, ↑ or ^).
For these binary operations, the following is the order of precedence (highest to
lowest):

OPERATOR                                PRECEDENCE

Exponentiation ($, ↑ or ^)              Highest
Multiplication and division (*, /)      Next
Addition and subtraction (+, -)         Lowest

4.3. Converting expressions using Stack:

Let us convert the expressions from one type to another. These can be done as follows:

1. Infix to postfix
2. Infix to prefix
3. Postfix to infix
4. Postfix to prefix
5. Prefix to infix
6. Prefix to postfix

4.3.1. Conversion from infix to postfix:

Procedure to convert from infix expression to postfix expression is as follows:

1. Scan the infix expression from left to right.

2. a) If the scanned symbol is a left parenthesis, push it onto the stack.

   b) If the scanned symbol is an operand, then place it directly in the postfix
      expression (output).

   c) If the scanned symbol is a right parenthesis, then go on popping all the
      items from the stack and place them in the postfix expression till we get
      the matching left parenthesis.

   d) If the scanned symbol is an operator, then go on removing operators from
      the stack and placing them in the postfix expression as long as the
      precedence of the operator on top of the stack is greater than (or greater
      than or equal to) the precedence of the scanned operator; then push the
      scanned operator onto the stack.

Example 1:

Convert the infix expression ((A - (B + C)) * D) ^ (E + F) to postfix form:

SYMBOL    POSTFIX STRING    STACK      REMARKS

(                           (
(                           ((
A         A                 ((
-         A                 ((-
(         A                 ((-(
B         AB                ((-(
+         AB                ((-(+
C         ABC               ((-(+
)         ABC+              ((-
)         ABC+-             (
*         ABC+-             (*
D         ABC+-D            (*
)         ABC+-D*
^         ABC+-D*           ^
(         ABC+-D*           ^(
E         ABC+-D*E          ^(
+         ABC+-D*E          ^(+
F         ABC+-D*EF         ^(+
)         ABC+-D*EF+        ^
End of    ABC+-D*EF+^                  The input is now empty. Pop the output symbols
string                                 from the stack until it is empty.

Example 2:

Convert a + b * c + (d * e + f) * g the infix expression into postfix form.

SYMBOL POSTFIX STRING STACK REMARKS

a a

+ a +

b ab +
* ab +*

c abc +*

+ abc*+ +

( abc*+ +(

d abc*+d +(

* abc*+d +(*

e abc*+de +(*

+ abc*+de* +(+

f abc*+de*f +(+

) abc*+de*f+ +

* abc*+de*f+ +*

g abc*+de*f+g +*
End of The input is now empty. Pop the output symbols
string abc*+de*f+g*+ from the stack until it is empty.

Example 3:

Convert the following infix expression A + B * C - D / E * H into its equivalent
postfix expression.

SYMBOL POSTFIX STRING STACK REMARKS


A A
+ A +
B AB +
* AB +*
C ABC +*
- ABC*+ -
D ABC*+D -
/ ABC*+D -/
E ABC*+DE -/
* ABC*+DE/ -*
H ABC*+DE/H -*
End of The input is now empty. Pop the output symbols from
string ABC*+DE/H*- the stack until it is empty.

Example 4:

Convert the following infix expression A + (B * C - (D / E ^ F) * G) * H into its
equivalent postfix expression.

SYMBOL    POSTFIX STRING        STACK      REMARKS

A         A
+         A                     +
(         A                     +(
B         AB                    +(
*         AB                    +(*
C         ABC                   +(*
-         ABC*                  +(-
(         ABC*                  +(-(
D         ABC*D                 +(-(
/         ABC*D                 +(-(/
E         ABC*DE                +(-(/
^         ABC*DE                +(-(/^
F         ABC*DEF               +(-(/^
)         ABC*DEF^/             +(-
*         ABC*DEF^/             +(-*
G         ABC*DEF^/G            +(-*
)         ABC*DEF^/G*-          +
*         ABC*DEF^/G*-          +*
H         ABC*DEF^/G*-H         +*
End of    ABC*DEF^/G*-H*+                  The input is now empty. Pop the output
string                                     symbols from the stack until it is empty.

4.3.2. Program to convert an infix to postfix expression:

# include <stdio.h>
# include <conio.h>
# include <string.h>

char postfix[50];
char infix[50];
char opstack[50];              /* operator stack */
int i = 0, j = 0, top = 0;

int lesspriority(char op, char op_at_stack)


{
int k;
int pv1; /* priority value of op */
int pv2; /* priority value of op_at_stack */
char operators[] = {'+', '-', '*', '/', '%', '^', '(' };
int priority_value[] = {0,0,1,1,2,3,4};
if( op_at_stack == '(' )
return 0;
for(k = 0; k < 6; k ++)
{
if(op == operators[k])
pv1 = priority_value[k];
}
for(k = 0; k < 6; k ++)
{
if(op_at_stack == operators[k])
pv2 = priority_value[k];
}
if(pv1 < pv2)
return 1;
else
return 0;
}
/* Before pushing the operator 'op' onto the stack, check its priority against the
operator on top of opstack; while the scanned operator has lower priority than the
top of the stack, pop operators from the stack into the postfix string, then push
op onto the stack itself. */
void push(char op)
{
if(top == 0)
{
opstack[top] = op;
top++;
}
else
{
if(op != '(' )
{
while(top > 0 && lesspriority(op, opstack[top-1]) == 1)
{
postfix[j] = opstack[--top];
j++;
}
}
opstack[top] = op; /* pushing onto stack */
top++;
}
}

void pop()
{
while(opstack[--top] != '(' ) /* pop until '(' comes */
{
postfix[j] = opstack[top];
j++;
}
}

void main()
{
char ch;
clrscr();
printf("\n Enter Infix Expression : ");
gets(infix);
while( (ch = infix[i++]) != '\0' )
{
switch(ch)
{
case ' ' : break;
case '(' :
case '+' :
case '-' :
case '*' :
case '/' :
case '^' :
case '%' :
push(ch); /* check priority and push */ break;

case ')' :
pop();
break;
default :
postfix[j] = ch;
j++;
}
}
while(top > 0)
{
postfix[j] = opstack[--top];
j++;
}
postfix[j] = '\0';
printf("\n Infix Expression : %s ", infix);
printf("\n Postfix Expression : %s ", postfix);
getch();
}

4.3.3. Conversion from infix to prefix:

The precedence rules for converting an expression from infix to prefix are identical. The
only change from postfix conversion is that traverse the expression from right to left
and the operator is placed before the operands rather than after them. The prefix form
of a complex expression is not the mirror image of the postfix form.

Example 1:

Convert the infix expression A + B - C into prefix expression.

SYMBOL    PREFIX STRING    STACK     REMARKS

C         C
-         C                -
B         BC               -
+         BC               -+
A         ABC              -+
End of    - + A B C                  The input is now empty. Pop the output symbols
string                               from the stack until it is empty.

Example 2:

Convert the infix expression (A + B) * (C - D) into prefix expression.

SYMBOL    PREFIX STRING    STACK     REMARKS

)                          )
D         D                )
-         D                )-
C         CD               )-
(         -CD
*         -CD              *
)         -CD              *)
B         B-CD             *)
+         B-CD             *)+
A         AB-CD            *)+
(         +AB-CD           *
End of    * + A B - C D              The input is now empty. Pop the output symbols
string                               from the stack until it is empty.
Example 3:

Convert the infix expression A ^ B * C - D + E / F / (G + H) into prefix expression.

SYMBOL    PREFIX STRING                  STACK     REMARKS

)                                        )
H         H                              )
+         H                              )+
G         GH                             )+
(         +GH
/         +GH                            /
F         F+GH                           /
/         F+GH                           //
E         EF+GH                          //
+         //EF+GH                        +
D         D//EF+GH                       +
-         D//EF+GH                       +-
C         CD//EF+GH                      +-
*         CD//EF+GH                      +-*
B         BCD//EF+GH                     +-*
^         BCD//EF+GH                     +-*^
A         ABCD//EF+GH                    +-*^
End of    + - * ^ A B C D / / E F + G H            The input is now empty. Pop the
string                                             output symbols from the stack until
                                                   it is empty.

4.3.4. Program to convert an infix to prefix expression:

# include <stdio.h>
# include <conio.h>
# include <string.h>

char prefix[50];
char infix[50];
char opstack[50];              /* operator stack */
int j = 0, top = 0;

void insert_beg(char ch)


{
int k;
if(j == 0)
prefix[0] = ch;
else
{
for(k = j + 1; k > 0; k--)
prefix[k] = prefix[k - 1];
prefix[0] = ch;
}
j++;
}
int lesspriority(char op, char op_at_stack)
{
int k;
int pv1; /* priority value of op */
int pv2; /* priority value of op_at_stack */
char operators[] = {'+', '-', '*', '/', '%', '^', ')'};
int priority_value[] = {0, 0, 1, 1, 2, 3, 4};
if(op_at_stack == ')' )
return 0;
for(k = 0; k < 6; k ++)
{
if(op == operators[k])
pv1 = priority_value[k];
}
for(k = 0; k < 6; k ++)
{
if( op_at_stack == operators[k] )
pv2 = priority_value[k];
}
if(pv1 < pv2)
return 1;
else
return 0;
}

void push(char op) /* op operator */


{
if(top == 0)
{
opstack[top] = op;
top++;
}
else
{
if(op != ')')
{
/* before pushing the operator 'op' into the stack check priority of op with
top of operator stack if less pop the operator from stack then push into postfix
string else push op onto stack itself */

while(top > 0 && lesspriority(op, opstack[top-1]) == 1)


{
insert_beg(opstack[--top]);
}
}
opstack[top] = op; /* pushing onto stack */
top++;
}
}

void pop()
{
while(opstack[--top] != ')') /* pop until ')' comes; */
insert_beg(opstack[top]);
}

void main()
{
char ch;
int l, i = 0;
clrscr();
printf("\n Enter Infix Expression : ");
gets(infix);
l = strlen(infix);
while(l > 0)
{
ch = infix[--l];
switch(ch)
{
case ' ' : break;
case ')' :
case '+' :
case '-' :
case '*' :
case '/' :
case '^' :
case '%' :
push(ch); /* check priority and push */ break;

case '(' :
pop();
break;
default :
insert_beg(ch);
}
}
while( top > 0 )
{
insert_beg( opstack[--top] );
}
prefix[j] = '\0';
printf("\n Infix Expression : %s ", infix);
printf("\n Prefix Expression : %s ", prefix);
getch();
}

4.3.5. Conversion from postfix to infix:

Procedure to convert postfix expression to infix expression is as follows:

1. Scan the postfix expression from left to right.

2. If the scanned symbol is an operand, then push it onto the stack.

3. If the scanned symbol is an operator, pop two symbols from the stack
and create it as a string by placing the operator in between the operands
and push it onto the stack.

4. Repeat steps 2 and 3 till the end of the expression.

Example:

Convert the following postfix expression A B C * D E F ^ / G * - H * + into its


equivalent infix expression.
Symbol Stack Remarks

A A Push A

B A B Push B

C A B C Push C
Pop two operands and place the
* A (B*C) operator in between the operands and
push the string.

D A (B*C) D Push D

E A (B*C) D E Push E

F A (B*C) D E F Push F
Pop two operands and place the
^ A (B*C) D (E^F) operator in between the operands and
push the string.
Pop two operands and place the
/ A (B*C) (D/(E^F)) operator in between the operands and
push the string.

G A (B*C) (D/(E^F)) G Push G


Pop two operands and place the
* A (B*C) ((D/(E^F))*G) operator in between the operands and
push the string.
-        A ((B*C)-((D/(E^F))*G))               Pop two operands and place the operator
                                               in between the operands and push the string.

H        A ((B*C)-((D/(E^F))*G)) H             Push H

*        A (((B*C)-((D/(E^F))*G)) * H)         Pop two operands and place the operator
                                               in between the operands and push the string.

+        (A + (((B*C)-((D/(E^F))*G)) * H))     Pop two operands and place the operator
                                               in between the operands and push the string.

End of                                         The input is now empty. The string formed
string                                         is infix.

4.3.6. Program to convert postfix to infix expression:

# include <stdio.h>
# include <conio.h>
# include <string.h>
# define MAX 100

void pop (char*);


void push(char*);

char stack[MAX] [MAX];


int top = -1;
void main()
{
char s[MAX], str1[MAX], str2[MAX], str[MAX];
char s1[2],temp[2];
int i=0;
clrscr( ) ;
printf("\Enter the postfix expression; ");
gets(s);
while (s[i]!='\0')
{
if(s[i] == ' ' ) /*skip whitespace, if any*/
i++;
if (s[i] == '^' || s[i] == '*'|| s[i] == '-' || s[i] == '+' || s[i] == '/')
{
pop(str1);
pop(str2);
temp[0] ='(';
temp[1] ='\0';
strcpy(str, temp);
strcat(str, str2);
temp[0] = s[i];
temp[1] = '\0';
strcat(str,temp);
strcat(str, str1);
temp[0] =')';
temp[1] ='\0';
strcat(str,temp);
push(str);
}
else
{
temp[0]=s[i];
temp[1]='\0';
strcpy(s1, temp);
push(s1);
}
i++;
}
printf("\nThe Infix expression is: %s", stack[0]);

void pop(char *a1)


{
strcpy(a1,stack[top]);
top--;
}

void push (char*str)


{
if(top == MAX - 1)
printf("\nstack is full");
else
{
top++;
strcpy(stack[top], str);
}
}
4.3.7. Conversion from postfix to prefix:

Procedure to convert postfix expression to prefix expression is as follows:

1. Scan the postfix expression from left to right.

2. If the scanned symbol is an operand, then push it onto the stack.

3. If the scanned symbol is an operator, pop two symbols from the stack
and create it as a string by placing the operator in front of the operands
and push it onto the stack.

5. Repeat steps 2 and 3 till the end of the expression.

Example:

Convert the following postfix expression A B C * D E F ^ / G * - H * + into its


equivalent prefix expression.

Symbol Stack Remarks

A A Push A

B A B Push B

C A B C Push C

Pop two operands and place the operator


* A *BC
in front the operands and push the string.

D A *BC D Push D

E A *BC D E Push E

F A *BC D E F Push F

Pop two operands and place the operator


^ A *BC D ^EF
in front the operands and push the string.
Pop two operands and place the operator
/ A *BC /D^EF
in front the operands and push the string.

G A *BC /D^EF G Push G

*        A *BC */D^EFG                 Pop two operands and place the operator
                                       in front of the operands and push the string.
Pop two operands and place the operator
- A - *BC*/D^EFG
in front the operands and push the string.

H A - *BC*/D^EFG H Push H

Pop two operands and place the operator


* A *- *BC*/D^EFGH
in front the operands and push the string.

+ +A*-*BC*/D^EFGH

End of
The input is now empty. The string formed is prefix.
string
4.3.8. Program to convert postfix to prefix expression:

# include <stdio.h>
# include <conio.h>
# include <string.h>

#define MAX 100


void pop (char *a1);
void push(char *str);
char stack[MAX][MAX];
int top =-1;

main()
{
char s[MAX], str1[MAX], str2[MAX], str[MAX];
char s1[2], temp[2];
int i = 0;
clrscr();
printf("Enter the postfix expression; ");
gets (s);
while(s[i]!='\0')
{
/*skip whitespace, if any */
if (s[i] == ' ')
i++;
if(s[i] == '^' || s[i] == '*' || s[i] == '-' || s[i]== '+' || s[i] == '/')
{
pop (str1);
pop (str2);
temp[0] = s[i];
temp[1] = '\0';
strcpy (str, temp);
strcat(str, str2);
strcat(str, str1);
push(str);
}
else
{
temp[0] = s[i];
temp[1] = '\0';
strcpy (s1, temp);
push (s1);
}
i++;
}
printf("\n The prefix expression is: %s", stack[0]);
}

void pop(char*a1)
{
if(top == -1)
{
printf("\nStack is empty");
return ;
}
else
{
strcpy (a1, stack[top]);
top--;
}
}
void push (char *str)
{
if(top == MAX - 1)
printf("\nstack is full");
else
{
top++;
strcpy(stack[top], str);
}
}

4.3.9. Conversion from prefix to infix:

Procedure to convert prefix expression to infix expression is as follows:

1. Scan the prefix expression from right to left (reverse order).


2. If the scanned symbol is an operand, then push it onto the stack.
3. If the scanned symbol is an operator, pop two symbols from the stack
and create it as a string by placing the operator in between the operands
and push it onto the stack.

4. Repeat steps 2 and 3 till the end of the expression.

Example:

Convert the following prefix expression + A * - * B C * / D ^ E F G H into its equivalent


infix expression.

Symbol Stack Remarks

H H Push H

G H G Push G

F H G F Push F

E H G F E Push E

^        H G (E^F)                     Pop two operands and place the operator in
                                       between the operands and push the string.

D        H G (E^F) D                   Push D

/        H G (D/(E^F))                 Pop two operands and place the operator in
                                       between the operands and push the string.

*        H ((D/(E^F))*G)               Pop two operands and place the operator in
                                       between the operands and push the string.
C H ((D/(E^F))*G) C Push C

B H ((D/(E^F))*G) C B Push B
*        H ((D/(E^F))*G) (B*C)             Pop two operands and place the operator in
                                           between the operands and push the string.

-        H ((B*C)-((D/(E^F))*G))           Pop two operands and place the operator in
                                           between the operands and push the string.

*        (((B*C)-((D/(E^F))*G))*H)         Pop two operands and place the operator in
                                           between the operands and push the string.

A        (((B*C)-((D/(E^F))*G))*H) A       Push A

+        (A+(((B*C)-((D/(E^F))*G))*H))     Pop two operands and place the operator in
                                           between the operands and push the string.
End of
The input is now empty. The string formed is infix.
string

4.3.10. Program to convert prefix to infix expression:

# include <stdio.h>
# include <conio.h>
# include <string.h>
# define MAX 100

void pop (char*);


void push(char*);
char stack[MAX] [MAX];
int top = -1;

void main()
{
char s[MAX], str1[MAX], str2[MAX], str[MAX];
char s1[2],temp[2];
int i=0;
clrscr( ) ;
printf("\Enter the prefix expression; ");
gets(s);
strrev(s);
while (s[i]!='\0')
{
/*skip whitespace, if any*/
if(s[i] == ' ' )
i++;
if (s[i] == '^' || s[i] == '*'|| s[i] == '-' || s[i] == '+' || s[i] == '/')
{
pop(str1);
pop(str2);
temp[0] ='(';
temp[1] ='\0';
strcpy(str, temp);
strcat(str, str1);
temp[0] = s[i];
temp[1] = '\0';
strcat(str,temp);
strcat(str, str2);
temp[0] =')';
temp[1] ='\0';
strcat(str,temp);
push(str);
}
else
{
temp[0]=s[i];
temp[1]='\0';
strcpy(s1, temp);
push(s1);
}
i++;
}
printf("\nThe infix expression is: %s", stack[0]);
}

void pop(char *a1)


{
strcpy(a1,stack[top]);
top--;
}

void push (char*str)


{
if(top == MAX - 1)
printf("\nstack is full");
else
{
top++;
strcpy(stack[top], str);
}
}

4.3.11. Conversion from prefix to postfix:

Procedure to convert prefix expression to postfix expression is as follows:

1. Scan the prefix expression from right to left (reverse order).

2. If the scanned symbol is an operand, then push it onto the stack.

3. If the scanned symbol is an operator, pop two symbols from the stack
and create it as a string by placing the operator after the operands and
push it onto the stack.

4. Repeat steps 2 and 3 till the end of the expression.

Example:

Convert the following prefix expression + A * - * B C * / D ^ E F G H into its equivalent


postfix expression.

Symbol Stack Remarks

H H Push H

G H G Push G

F H G F Push F

E H G F E Push E

Pop two operands and place the operator


^ H G EF^
after the operands and push the string.

D H G EF^ D Push D
Pop two operands and place the operator
/ H G DEF^/
after the operands and push the string.
Pop two operands and place the operator
* H DEF^/G*
after the operands and push the string.

C H DEF^/G* C Push C

B H DEF^/G* C B Push B

Pop two operands and place the operator


* H DEF^/G* BC*
after the operands and push the string.
Pop two operands and place the operator
- H BC*DEF^/G*-
after the operands and push the string.
Pop two operands and place the operator
* BC*DEF^/G*-H* after the operands and push the string.

A BC*DEF^/G*-H* A Push A

ABC*DEF^/G*-H*+ Pop two operands and place the operator


+
after the operands and push the string.
End of
The input is now empty. The string formed is postfix.
string

4.3.12. Program to convert prefix to postfix expression:

# include <stdio.h>
# include <conio.h>
# include <string.h>

#define MAX 100

void pop (char *a1);


void push(char *str);
char stack[MAX][MAX];
int top =-1;

void main()
{
char s[MAX], str1[MAX], str2[MAX], str[MAX];
char s1[2], temp[2];
int i = 0;
clrscr();
printf("Enter the prefix expression; ");
gets (s);
strrev(s);
while(s[i]!='\0')
{
if (s[i] == ' ') /*skip whitespace, if any */
i++;
if(s[i] == '^' || s[i] == '*' || s[i] == '-' || s[i]== '+' || s[i] == '/')
{
pop (str1);
pop (str2);
temp[0] = s[i];
temp[1] = '\0';
strcat(str1,str2);
strcat (str1, temp);
strcpy(str, str1);
push(str);
}
else
{
temp[0] = s[i];
temp[1] = '\0';
strcpy (s1, temp);
push (s1);
}
i++;
}
printf("\nThe postfix expression is: %s", stack[0]);
}
void pop(char*a1)
{
if(top == -1)
{
printf("\nStack is empty");
return ;
}
else
{
strcpy (a1, stack[top]);
top--;
}
}
void push (char *str)
{
if(top == MAX - 1)
printf("\nstack is full");
else
{
top++;
strcpy(stack[top], str);
}
}

4.4. Evaluation of postfix expression:

The postfix expression is evaluated easily by the use of a stack. When a number is
seen, it is pushed onto the stack; when an operator is seen, the operator is applied to
the two numbers that are popped from the stack and the result is pushed onto the
stack. When an expression is given in postfix notation, there is no need to know any
precedence rules; this is our obvious advantage.

Example 1:

Evaluate the postfix expression: 6 5 2 3 + 8 * + 3 + *

SYMBOL   OPERAND 1   OPERAND 2   VALUE   STACK         REMARKS

6                                        6
5                                        6, 5
2                                        6, 5, 2
3                                        6, 5, 2, 3    The first four symbols are placed
                                                       on the stack.
+        2           3           5       6, 5, 5       Next '+' is seen, so 3 and 2 are
                                                       popped and their sum 5 is pushed.
8        2           3           5       6, 5, 5, 8    Next 8 is pushed.
*        5           8           40      6, 5, 40      Next '*' is seen, so 8 and 5 are
                                                       popped and 8 * 5 = 40 is pushed.
+        5           40          45      6, 45         Next '+' is seen, so 40 and 5 are
                                                       popped and 40 + 5 = 45 is pushed.
3        5           40          45      6, 45, 3      Now, 3 is pushed.
+        45          3           48      6, 48         Next '+' is seen, so 3 and 45 are
                                                       popped and 45 + 3 = 48 is pushed.
*        6           48          288     288           Finally '*' is seen; 48 and 6 are
                                                       popped and 6 * 48 = 288 is pushed.

Example 2:

Evaluate the following postfix expression: 6 2 3 + - 3 8 2 / + * 2 ^ 3 +

SYMBOL   OPERAND 1   OPERAND 2   VALUE   STACK

6                                        6
2                                        6, 2
3                                        6, 2, 3
+        2           3           5       6, 5
-        6           5           1       1
3        6           5           1       1, 3
8        6           5           1       1, 3, 8
2        6           5           1       1, 3, 8, 2
/        8           2           4       1, 3, 4
+        3           4           7       1, 7
*        1           7           7       7
2        1           7           7       7, 2
^        7           2           49      49
3        7           2           49      49, 3
+        49          3           52      52

4.4.1. Program to evaluate a postfix expression:

# include <stdio.h>
# include <conio.h>
# include <math.h>
# define MAX 20

int isoperator(char ch)


{
if(ch == '+' || ch == '-' || ch == '*' || ch == '/' || ch == '^')
return 1;
else
return 0;
}
void main(void)
{
char postfix[MAX];
int val;
char ch;
int i = 0, top = 0;
float val_stack[MAX], val1, val2, res;
clrscr();
printf("\n Enter a postfix expression: ");
scanf("%s", postfix);
while((ch = postfix[i]) != '\0')
{
if(isoperator(ch) == 1)
{
val2 = val_stack[--top];
val1 = val_stack[--top];
switch(ch)
{
case '+':
res = val1 + val2;
break;
case '-':
res = val1 - val2;
break;
case '*':
res = val1 * val2;
break;
case '/':
res = val1 / val2;
break;
case '^':
res = pow(val1, val2);
break;
}
val_stack[top] = res;
}
else
val_stack[top] = ch-48; /*convert character digit to integer digit */
top++;
i++;
}
printf("\n Values of %s is : %f ",postfix, val_stack[0] );
getch();
}

4.5. Applications of stacks:

1. Stack is used by compilers to check for balancing of parentheses, brackets


and braces.

2. Stack is used to evaluate a postfix expression.

3. Stack is used to convert an infix expression into postfix/prefix form.

4. In recursion, all intermediate arguments and return values are stored on the
   system stack.

5. During a function call the return address and arguments are pushed onto a
   stack and on return they are popped off.
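
The first application above can be sketched directly with the array stack of section
4.1.2. The following is a minimal illustrative sketch (not part of the original text;
the function name is_balanced, the array size and the driver are assumptions) that
checks whether (), [] and {} are balanced:

# include <stdio.h>
# define SIZE 50

/* Returns 1 if every '(', '[' and '{' in expr has a matching closing
   symbol in the correct order, otherwise returns 0. */
int is_balanced(char *expr)
{
    char stk[SIZE];              /* stack of opening symbols */
    int top = 0;                 /* index of the next free slot */
    int i;
    for(i = 0; expr[i] != '\0'; i++)
    {
        char ch = expr[i];
        if(ch == '(' || ch == '[' || ch == '{')
            stk[top++] = ch;                 /* push opening symbol  */
        else if(ch == ')' || ch == ']' || ch == '}')
        {
            if(top == 0)
                return 0;                    /* nothing left to match */
            top--;                           /* pop opening symbol    */
            if((ch == ')' && stk[top] != '(') ||
               (ch == ']' && stk[top] != '[') ||
               (ch == '}' && stk[top] != '{'))
                return 0;                    /* wrong kind of bracket */
        }
    }
    return top == 0;             /* balanced only if the stack is empty */
}

void main()
{
    printf("%d\n", is_balanced("(a + [b * c]) - {d / e}"));   /* prints 1 */
    printf("%d\n", is_balanced("(a + b))"));                  /* prints 0 */
}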
4.6. Queue:

A queue is another special kind of list, where items are inserted at one end called the
rear and deleted at the other end called the front. Another name for a queue is a
"first-in-first-out" (FIFO) list.

The operations for a queue are analogous to those for a stack; the difference is that
insertions go at the end of the list rather than at the beginning. We shall use the
following operations on queues:

enqueue: which inserts an element at the end of the queue.


dequeue: which deletes an element at the start of the queue.

4.6.1. Representation of Queue:

Let us consider a queue, which can hold maximum of five elements. Initially the queue
is empty.

Queue (empty):  |    |    |    |    |    |        FRONT = REAR = 0

Now, insert 11 to the queue. Then queue status will be:

Queue:  | 11 |    |    |    |    |                FRONT = 0, REAR = REAR + 1 = 1

Next, insert 22 to the queue. Then the queue status is:

Queue:  | 11 | 22 |    |    |    |                FRONT = 0, REAR = REAR + 1 = 2

Again insert another element 33 to the queue. The status of the queue is:

Queue:  | 11 | 22 | 33 |    |    |                FRONT = 0, REAR = REAR + 1 = 3
Now, delete an element. The element deleted is the element at the front of the queue.
So the status of the queue is:

Queue:  |    | 22 | 33 |    |    |                FRONT = 1, REAR = 3

Again, delete an element. The element to be deleted is always pointed to by the FRONT
pointer. So, 22 is deleted. The queue status is as follows:

Queue:  |    |    | 33 |    |    |                FRONT = 2, REAR = 3

Now, insert new elements 44 and 55 into the queue. The queue status is:

Queue:  |    |    | 33 | 44 | 55 |                FRONT = 2, REAR = 5

Next insert another element, say 66 to the queue. We cannot insert 66 to the queue as
the rear crossed the maximum size of the queue (i.e., 5). There will be queue full
signal. The queue status is as follows:

Queue:  |    |    | 33 | 44 | 55 |                FRONT = 2, REAR = 5 (queue full signal)

Now it is not possible to insert an element 66 even though there are two vacant
positions in the linear queue. To over come this problem the elements of the queue are
to be shifted towards the beginning of the queue so that it creates vacant position at
the rear end. Then the FRONT and REAR are to be adjusted properly. The element 66
can be inserted at the rear end. After this operation, the queue status is as follows:

Queue:  | 33 | 44 | 55 | 66 |    |                FRONT = 0, REAR = 4

This difficulty can overcome if we treat queue position with index 0 as a position that
comes after position with index 4 i.e., we treat the queue as a circular queue.
4.6.2. Source code for Queue operations using array:

In order to create a queue we require a one dimensional array Q[MAX] and two variables
front and rear. The convention used in the code below is that front is the index of the
first element of the queue and rear is the index of the position where the next element
will be inserted. Thus, front = rear if and only if there are no elements in the queue,
and the initial condition is front = rear = 0. The various queue operations to perform
creation, deletion and display of the elements in a queue are as follows:

1. insertQ(): inserts an element at the end of queue Q.


2. deleteQ(): deletes the first element of Q.
3. displayQ(): displays the elements in the queue.

# include <stdio.h>
# include <conio.h>
# define MAX 6
int Q[MAX];
int front, rear;

void insertQ()
{
int data;
if(rear == MAX)
{
printf("\n Linear Queue is full");
return;
}
else
{
printf("\n Enter data: ");
scanf("%d", &data);
Q[rear] = data;
rear++;
printf("\n Data Inserted in the Queue ");
}
}
void deleteQ()
{
if(rear == front)
{
printf("\n\n Queue is Empty..");
return;
}
else
{
printf("\n Deleted element from Queue is %d",
Q[front]); front++;
}
}
void displayQ()
{
int i;
if(front == rear)
{
printf("\n\n\t Queue is Empty");
return;
}
else
{
printf("\n Elements in Queue are: ");
for(i = front; i < rear; i++)
{
printf("%d\t", Q[i]);
}
}
}
int menu()
{
int ch;
clrscr();
printf("\n \tQueue operations using ARRAY..");
printf("\n -----------**********-------------\n");
printf("\n 1. Insert ");
printf("\n 2. Delete ");
printf("\n 3. Display");
printf("\n 4. Quit ");
printf("\n Enter your choice: ");
scanf("%d", &ch);
return ch;
}
void main()
{
int ch;
do
{
ch = menu();
switch(ch)
{
case 1:
insertQ();
break;
case 2:
deleteQ();
break;
case 3:
displayQ();
break;
case 4:
return;
}
getch();
} while(1);
}

4.6.3. Linked List Implementation of Queue:

We can represent a queue as a linked list. In a queue data is deleted from the front end
and inserted at the rear end. We can perform similar operations on the two ends of a
list. We use two pointers front and rear for our linked queue implementation.

The linked queue looks as shown in figure 4.4:

front -> 10 -> 20 -> 30 -> 40 <- rear        Figure 4.4. Linked queue representation
4.6.4. Source code for queue operations using linked list:

# include <stdio.h>
# include <stdlib.h>
# include <conio.h>

struct queue
{
int data;
struct queue *next;
};
typedef struct queue node;
node *front = NULL;
node *rear = NULL;

node* getnode()
{
node *temp;
temp = (node *) malloc(sizeof(node)) ;
printf("\n Enter data ");
scanf("%d", &temp -> data);
temp -> next = NULL;
return temp;
}
void insertQ()
{
node *newnode;
newnode = getnode();
if(newnode == NULL)
{
printf("\n Queue Full");
return;
}
if(front == NULL)
{
front = newnode;
rear = newnode;
}
else
{
rear -> next = newnode;
rear = newnode;
}
printf("\n\n\t Data Inserted into the Queue..");
}
void deleteQ()
{
node *temp;
if(front == NULL)
{
printf("\n\n\t Empty Queue..");
return;
}
temp = front;
front = front -> next;
printf("\n\n\t Deleted element from queue is %d ", temp ->
data); free(temp);
}
void displayQ()
{
node *temp;
if(front == NULL)
{
printf("\n\n\t\t Empty Queue ");
}
else
{
temp = front;
printf("\n\n\n\t\t Elements in the Queue are: ");
while(temp != NULL )
{
printf("%5d ", temp -> data);
temp = temp -> next;
}
}
}

char menu()
{
char ch;
clrscr();
printf("\n \t..Queue operations using pointers.. ");
printf("\n\t -----------**********-------------
\n"); printf("\n 1. Insert ");
printf("\n 2. Delete ");
printf("\n 3. Display");
printf("\n 4. Quit ");
printf("\n Enter your choice: ");
ch = getche();
return ch;
}

void main()
{
char ch;
do
{
ch = menu();
switch(ch)
{
case '1' :
insertQ();
break;
case '2' :
deleteQ();
break;
case '3' :
displayQ();
break;
case '4':
return;
}
getch();
} while(ch != '4');
}
4.7. Applications of Queue:

1. It is used to schedule the jobs to be processed by the CPU.

2. When multiple users send print jobs to a printer, each printing job is kept in
the printing queue. Then the printer prints those jobs according to first in
first out (FIFO) basis.

3. Breadth first search uses a queue data structure to find an element from a
graph.

4.8. Circular Queue:

A more efficient queue representation is obtained by regarding the array Q[MAX] as


circular. Any number of items could be placed on the queue. This implementation of a
queue is called a circular queue because it uses its storage array as if it were a circle
instead of a linear list.

There are two problems associated with linear queue. They are:

Time consuming: linear time to be spent in shifting the elements to the


beginning of the queue.

Signaling queue full: even if the queue is having vacant position.

For example, let us consider a linear queue status as follows:

Queue:  |    |    | 33 | 44 | 55 |                FRONT = 2, REAR = 5

Next insert another element, say 66 to the queue. We cannot insert 66 to the queue as
the rear crossed the maximum size of the queue (i.e., 5). There will be queue full
signal. The queue status is as follows:

Queue:  |    |    | 33 | 44 | 55 |                FRONT = 2, REAR = 5 (queue full signal)

This difficulty can be overcome if we treat queue position with index zero as a position
that comes after position with index four then we treat the queue as a circular queue.

In circular queue if we reach the end for inserting elements to it, it is possible to insert
new elements if the slots at the beginning of the circular queue are empty.
4.8.1. Representation of Circular Queue:

Let us consider a circular queue, which can hold maximum (MAX) of six elements.
Initially the queue is empty.

Circular queue (MAX = 6), empty:                  FRONT = REAR = 0, COUNT = 0

Now, insert 11 to the circular queue. Then circular queue status will be:

Circular queue: 11                                FRONT = 0, REAR = (REAR + 1) % 6 = 1,
                                                  COUNT = 1

Insert new elements 22, 33, 44 and 55 into the circular queue. The circular queue
status is:
Circular queue: 11 22 33 44 55                    FRONT = 0, REAR = 5, COUNT = 5
Now, delete an element. The element deleted is the element at the front of the circular
queue. So, 11 is deleted. The circular queue status is as follows:

Circular queue: 22 33 44 55                       FRONT = (FRONT + 1) % 6 = 1, REAR = 5,
                                                  COUNT = COUNT - 1 = 4

Again, delete an element. The element to be deleted is always pointed to by the FRONT
pointer. So, 22 is deleted. The circular queue status is as follows:

Circular queue: 33 44 55                          FRONT = (FRONT + 1) % 6 = 2, REAR = 5,
                                                  COUNT = COUNT - 1 = 3

Again, insert another element 66 to the circular queue. The status of the circular queue
is:
Circular queue: 33 44 55 66                       FRONT = 2, REAR = (REAR + 1) % 6 = 0,
                                                  COUNT = COUNT + 1 = 4


Now, insert new elements 77 and 88 into the circular queue. The circular queue status
is:

Circular queue: 33 44 55 66 77 88                 FRONT = 2, REAR = 2, COUNT = 6

Now, if we insert an element to the circular queue, as COUNT = MAX we cannot add the
element to circular queue. So, the circular queue is full.

4.8.2. Source code for Circular Queue operations, using array:

# include <stdio.h>
# include <conio.h>
# define MAX 6

int CQ[MAX];
int front = 0;
int rear = 0;
int count = 0;

void insertCQ()
{
int data;
if(count == MAX)
{
printf("\n Circular Queue is Full");
}
else
{
printf("\n Enter data: ");
scanf("%d", &data);
CQ[rear] = data;
rear = (rear + 1) % MAX;
count ++;
printf("\n Data Inserted in the Circular Queue ");
}
}

void deleteCQ()
{
if(count == 0)
{
printf("\n\nCircular Queue is Empty..");
}
else
{
printf("\n Deleted element from Circular Queue is %d ", CQ[front]);
front = (front + 1) % MAX;
count --;
}
}
void displayCQ()
{
int i, j;
if(count == 0)
{
printf("\n\n\t Circular Queue is Empty ");
}
else
{
printf("\n Elements in Circular Queue are: ");
j = count;
for(i = front; j != 0; j--)
{
printf("%d\t", CQ[i]);
i = (i + 1) % MAX;
}
}
}

int menu()
{
int ch;
clrscr();
printf("\n \t Circular Queue Operations using ARRAY..");
printf("\n -----------**********-------------\n");
printf("\n 1. Insert ");
printf("\n 2. Delete ");
printf("\n 3. Display");
printf("\n 4. Quit ");
printf("\n Enter Your Choice: ");
scanf("%d", &ch);
return ch;
}

void main()
{
int ch;
do
{
ch = menu();
switch(ch)
{
case 1:
insertCQ();
break;
case 2:
deleteCQ();
break;
case 3:
displayCQ();
break;
case 4:
return;
default:
printf("\n Invalid Choice ");
}
getch();
} while(1);
}
4.9. Deque:

In the preceding sections we saw queues in which we insert items at one end and
from which we remove items at the other end. In this section we examine an extension
of the queue, which provides a means to insert and remove items at both ends of the
queue. This data structure is a deque. The word deque is an acronym derived from
double-ended queue. Figure 4.5 shows the representation of a deque.

Figure 4.5. Representation of a deque (elements 36, 16, 56, 62, 19; insertion and
deletion are possible at both the front and the rear)

A deque provides four operations. Figure 4.6 shows the basic operations on a deque.

enqueue_front: insert an element at front.


dequeue_front: delete an element at front.
enqueue_rear: insert element at rear.
dequeue_rear: delete element at rear.

Figure 4.6. Basic operations on a deque

There are two variations of deque. They are:

Input restricted deque (IRD)


Output restricted deque (ORD)

An Input restricted deque is a deque, which allows insertions at one end but allows
deletions at both ends of the list.

An output restricted deque is a deque, which allows deletions at one end but
allows insertions at both ends of the list.
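
The text describes the deque operations but does not give an implementation. A minimal
sketch using a circular array follows (illustrative only; the fixed capacity, the use of
a count variable and the function names are assumptions, in the spirit of the circular
queue of section 4.8):

# include <stdio.h>
# define MAX 6

int DQ[MAX];
int front = 0;          /* index of the current front element        */
int count = 0;          /* number of elements currently in the deque */

int enqueue_front(int data)           /* insert at the front end */
{
    if(count == MAX) return 0;        /* deque full */
    front = (front + MAX - 1) % MAX;  /* step front back circularly */
    DQ[front] = data;
    count++;
    return 1;
}

int enqueue_rear(int data)            /* insert at the rear end */
{
    if(count == MAX) return 0;
    DQ[(front + count) % MAX] = data; /* first free slot after the last element */
    count++;
    return 1;
}

int dequeue_front(int *data)          /* delete from the front end */
{
    if(count == 0) return 0;          /* deque empty */
    *data = DQ[front];
    front = (front + 1) % MAX;
    count--;
    return 1;
}

int dequeue_rear(int *data)           /* delete from the rear end */
{
    if(count == 0) return 0;
    *data = DQ[(front + count - 1) % MAX];
    count--;
    return 1;
}

void main()
{
    int x;
    enqueue_rear(11); enqueue_rear(22); enqueue_front(33);   /* deque: 33 11 22 */
    dequeue_rear(&x);  printf("%d ", x);                     /* prints 22 */
    dequeue_front(&x); printf("%d ", x);                     /* prints 33 */
}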
4.10. Priority Queue:

A priority queue is a collection of elements such that each element has been assigned a
priority and such that the order in which elements are deleted and processed comes
from the following rules:

1. An element of higher priority is processed before any element of lower
   priority.

2. Two elements with the same priority are processed according to the order in
   which they were added to the queue.

A prototype of a priority queue is a time-sharing system: programs of high priority are
processed first, and programs with the same priority form a standard queue. An
efficient implementation of the priority queue is to use a heap, which in turn can be
used for sorting, giving the heap sort algorithm.
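
No implementation of a priority queue is given in the text; the following is a minimal
illustrative sketch (the structure, function names and the -1 "empty" sentinel are
assumptions) using a linked list kept ordered by priority, so that deletion always
removes the highest-priority, earliest-inserted element. A heap, as noted above, would
be the more efficient choice.

# include <stdio.h>
# include <stdlib.h>

struct pqnode
{
    int data;
    int priority;               /* larger value = higher priority */
    struct pqnode *next;
};
struct pqnode *pqfront = NULL;  /* highest-priority element is at the front */

/* Insert so that the list stays sorted by priority (stable for equal priorities). */
void pq_insert(int data, int priority)
{
    struct pqnode *temp, *newnode;
    newnode = (struct pqnode *) malloc(sizeof(struct pqnode));
    newnode->data = data;
    newnode->priority = priority;
    newnode->next = NULL;
    if(pqfront == NULL || priority > pqfront->priority)
    {
        newnode->next = pqfront;          /* becomes the new front */
        pqfront = newnode;
    }
    else
    {
        temp = pqfront;
        while(temp->next != NULL && temp->next->priority >= priority)
            temp = temp->next;            /* skip all nodes of equal or higher priority */
        newnode->next = temp->next;
        temp->next = newnode;
    }
}

/* Remove and return the element with the highest priority (the front node). */
int pq_delete(void)
{
    struct pqnode *temp = pqfront;
    int data;
    if(temp == NULL)
        return -1;                        /* queue empty; -1 used as a sentinel here */
    data = temp->data;
    pqfront = temp->next;
    free(temp);
    return data;
}

void main()
{
    pq_insert(10, 1);
    pq_insert(20, 3);
    pq_insert(30, 3);
    printf("%d ", pq_delete());           /* prints 20 (priority 3, inserted first)  */
    printf("%d ", pq_delete());           /* prints 30 (priority 3, inserted second) */
    printf("%d\n", pq_delete());          /* prints 10 (priority 1)                  */
}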

Exercises

1. What is a linear data structure? Give two examples of linear data structures.

2. Is it possible to have two designs for the same data structure that provide the
same functionality but are implemented differently?

3. What is the difference between the logical representation of a data structure and
the physical representation?

4. Transform the following infix expressions to reverse polish notation:


a) A B * C D + E / F / (G + H)
b) ((A + B) * C (D E)) (F + G)
c) A B / (C * D E)
d) (a + b c d) * (e + f / d))
f) 3 6 * 7 + 2 / 4 * 5 8
g) (A B) / ((D + E) * F)
h) ((A + B) / D) ((E F) * G)

5. Evaluate the following postfix expressions:


a) P1: 5, 3, +, 2, *, 6, 9, 7, -, /, -
b) P2: 3, 5, +, 6, 4, -, *, 4, 1, -, 2, , +
c) P3 : 3, 1, +, 2, , 7, 4, -, 2, *, +, 5, -

6. Consider the usual algorithm to convert an infix expression to a postfix


expression. Suppose that you have read 10 input characters during a conversion
and that the stack now contains these symbols:

+
(
bottom *
Now, suppose that you read and process the 11th symbol of the input. Draw the
stack for the case where the 11th symbol is:
A. A number:
B. A left parenthesis:
C. A right parenthesis:
D. A minus sign:
E. A division sign:
7. Write a program using stack for parenthesis matching. Explain what modifications
would be needed to make the parenthesis matching algorithm check expressions
with different kinds of parentheses such as (), [] and {}'s.

8. Evaluate the following prefix expressions:


a) + * 2 + / 14 2 5 1
b) - * 6 3 4 1
c) + + 2 6 + - 13 2 4

9. Convert the following infix expressions to prefix notation:


a) ((A + 2) * (B + 4)) -1
b) Z ((((X + 1) * 2) 5) / Y)
c) ((C * 2) + 1) / (A + B)
d) ((A + B) * C (D - E)) (F + G)
e) A B / (C * D E)

10.
a) The stack is implemented using array.
b) The stack is implemented using linked list.

11. Write an algorithm to construct a fully parenthesized infix expression from its

12. How can one convert a postfix expression to its prefix equivalent and vice-versa?

13. A double-ended queue (deque) is a linear list where additions and deletions can
be performed at either end. Represent a deque using an array to store the

14. In a circular queue represented by an array, how can one specify the number of
elements in the queue in ter -QUEUE-SIZE? Write a
-

15. Can a queue be represented by a circular linked list with only one pointer pointing

on such a queue

16.
well formed or not.

17. Represent N queues in a single one-


operations on the ith queue

18. Represent a stack and queue in a single one-dimensional array. Write functions

queue.
Multiple Choice Questions

1. Which among the following is a linear data structure: [ D ]


A. Queue
B. Stack
C. Linked List
D. all the above

2. Which among the following is a Dynamic data structure: [ A ]


A. Double Linked List C. Stack
B. Queue D. all the above
3. Stack is referred as: [ A ]
A. Last in first out list C. both A and B
B. First in first out list D. none of the above

4. A stack is a data structure in which all insertions and deletions of entries [ A ]


are made at:
A. One end C. Both the ends
B. In the middle D. At any position

5. A queue is a data structure in which all insertions and deletions are made [ A ]
respectively at:
A. rear and front C. front and rear
B. front and front D. rear and rear

6. Transform the following infix expression to postfix form: [ D ]


(A + B) * (C - D) / E
A. A B * C + D / - C. A B + C D * - / E
B. A B C * C D / - + D. A B + C D - * E /

7. Transform the following infix expression to postfix form: [ B ]


A - B / (C * D)
A. A B * C D - / C. / - D C * B A
B. A B C D * / - D. - / * A B C D

8. Evaluate the following prefix expression: * - + 4 3 5 / + 2 4 3 [ A ]

A. 4 C. 1
B. 8 D. none of the above

9. Evaluate the following postfix expression: 1 4 18 6 / 3 + + 5 / + [ C ]


A. 8 C. 3
B. 2 D. none of the above

10. Transform the following infix expression to prefix form: [ B ]


((C * 2) + 1) / (A + B)
A. A B + 1 2 C * + / C. / * + 1 2 C A B +
B. / + * C 2 1 + A B D. none of the above

11. Transform the following infix expression to prefix form: [ D ]


Z - ((((X + 1) * 2) - 5) / Y)
A. / - * + X 1 2 5 Y C. / * - + X 1 2 5 Y
B. Y 5 2 1 X + * - / D. none of the above

12. Queue is also known as: [ B ]


A. Last in first out list C. both A and B
B. First in first out list D. none of the above
13. One difference between a queue and a stack is: [ C ]
A. Queues require dynamic memory, but stacks do not.
B. Stacks require dynamic memory, but queues do not.
C. Queues use two ends of the structure; stacks use only one.
D. Stacks use two ends of the structure, queues use only one.

14. If the characters 'D', 'C', 'B', 'A' are placed in a queue (in that order), and [ D ]
then removed one at a time, in what order will they be removed?
A. ABCD C. DCAB
B. ABDC D. DCBA

15. Suppose we have a circular array implementation of the queue class, [ D ]


with ten items in the queue stored at data[2] through data[11]. The
CAPACITY is 42. Where does the push member function place the new
entry in the array?
A. data[1] C. data[11]
B. data[2] D. data[12]

16. Consider the implementation of the queue using a circular array. What [ B ]
goes wrong if we try to keep all the items at the front of a partially-filled
array (so that data[0] is always the front).
A. The constructor would require linear time.
B. The get_front function would require linear time.
C. The insert function would require linear time.
D. The is_empty function would require linear time.

17. In the linked list implementation of the queue class, where does the push [ A ]
member function place the new entry on the linked list?
A. At the head
B. At the tail
C. After all other entries that are greater than the new entry.
D. After all other entries that are smaller than the new entry.

18. In the circular array version of the queue class (with a fixed-sized array), [ ]
which operations require linear time for their worst-case behavior?
A. front C. empty
B. push D. None of these.

19. In the linked-list version of the queue class, which operations require [ ]
linear time for their worst-case behavior?
A. front C. empty
B. push D. None of these operations.

20. To implement the queue with a linked list, keeping track of a front [ B ]
pointer and a rear pointer. Which of these pointers will change during an
insertion into a NONEMPTY queue?
A. Neither changes C. Only rear_ptr changes.
B. Only front_ptr changes. D. Both change.

21. To implement the queue with a linked list, keeping track of a front [ D ]
pointer and a rear pointer. Which of these pointers will change during an
insertion into an EMPTY queue?
A. Neither changes C. Only rear_ptr changes.
B. Only front_ptr changes. D. Both change.
22. Suppose top is called on a priority queue that has exactly two entries [ B ]
with equal priority. How is the return value of top selected?
A. The implementation gets to choose either one.
B. The one which was inserted first.
C. The one which was inserted most recently.
D. This can never happen (violates the precondition)

23. Entries in a stack are "ordered". What is the meaning of this statement? [ D ]


A. A collection of stacks can be sorted.
B. Stack entries may be compared with the '<' operation.
C. The entries must be stored in a linked list.
D. There is a first entry, a second entry, and so on.

24. The operation for adding an entry to a stack is traditionally called: [ D ]


A. add C. insert
B. append D. push

25. The operation for removing an entry from a stack is traditionally called: [ C ]


A. delete C. pop
B. peek D. remove
26. Which of the following stack operations could result in stack underflow? [ B ]
A. is_empty C. push
B. pop D. Two or more of the above answers

27. Which of the following applications may use a stack? [ D ]


A. A parentheses balancing program.
B. Keeping track of local variables at run time.
C. Syntax analyzer for a compiler.
D. All of the above.

28. Here is an infix expression: 4 + 3 * (6 * 3 - 12). Suppose that we are [ D ]


using the usual stack algorithm to convert the expression from infix to
postfix notation. What is the maximum number of symbols that will
appear on the stack AT ONE TIME during the conversion of this
expression?
A. 1 C. 3
B. 2 D. 4

29. What is the value of the postfix expression 6 3 2 4 + - * : [ A ]


A. Something between -15 and -100
B. Something between -5 and -15
C. Something between 5 and -5
D. Something between 5 and 15
E. Something between 15 and 100

30. If the expression ((2 + 3) * 4 + 5 * (6 + 7) * 8) + 9 is evaluated with * [ A ]


having precedence over +, then the value obtained is same as the value
of which of the following prefix expressions?
A. + + * + 2 3 4 * * 5 + 6 7 8 9 C. * + + + 2 3 4 * * 5 + 6 7 8 9
B. + * + + 2 3 4 * * 5 + 6 7 8 9 D. + * + + 2 3 4 + + 5 * 6 7 8 9

31. Evaluate the following prefix expression: [ B ]


+ * 2 + / 14 2 5 1
A. 50 C. 40
B. 25 D. 15
32. Parentheses are never needed in a prefix or postfix expression: [ A ]
A. True C. Cannot be expected
B. False D. None of the above

33. A postfix expression is merely the reverse of the prefix expression: [ B ]


A. True C. Cannot be expected
B. False D. None of the above

34. Which among the following data structures may give an overflow error, even [ A ]
though the current number of elements in it is less than its size:
A. Simple Queue C. Stack
B. Circular Queue D. None of the above

35. Which among the following types of expressions does not require [ C ]
precedence rules for evaluation:
A. Fully parenthesized infix expression
B. Prefix expression
C. both A and B
D. none of the above

36. Conversion of infix arithmetic expression to postfix expression uses: [ D ]


A. Stack C. linked list
B. circular queue D. Queue
Chapter
5
Binary Trees
A data structure is said to be linear if its elements form a sequence or a
linear list. Previous linear data structures that we have studied like an
array, stacks, queues and linked lists organize data in linear order. A data
structure is said to be non linear if its elements form a hierarchical
classification where, data items appear at various levels.

Trees and Graphs are widely used non-linear data structures. Tree and
graph structures represent hierarchical relationships between individual
data elements. Graphs are nothing but trees with certain restrictions
removed.

In this chapter in particular, we will explain a special type of tree known
as a binary tree, which is easy to maintain in the computer.

5.1. TREES:

A tree is hierarchical collection of nodes. One of the nodes, known as the root, is at the
top of the hierarchy. Each node can have at most one link coming into it. The node
where the link originates is called the parent node. The root node has no parent. The
links leaving a node (any number of links are allowed) point to child nodes. Trees are
recursive structures. Each child node is itself the root of a subtree. At the bottom of
the tree are leaf nodes, which have no children.

Trees represent a special case of more general structures known as graphs. In a graph,
there are no restrictions on the number of links that can enter or leave a node, and
cycles may be present in the graph. Figure 5.1.1 shows a tree and a non-tree.

Figure 5.1.1. A tree and a structure that is not a tree

In a tree data structure, there is no distinction between the various children of a node
i.e., none is the "first child" or "last child". A tree in which such distinctions are made is
called an ordered tree, and data structures built on them are called ordered tree
data structures. Ordered trees are by far the commonest form of tree data structure.
5.2. BINARY TREE:

In general, tree nodes can have any number of children. In a binary tree, each node
can have at most two children. A binary tree is either empty or consists of a node
called the root together with two binary trees called the left subtree and the right
subtree.

A tree with no nodes is called as a null tree. A binary tree is shown in figure 5.2.1.

Figure 5.2.1. Binary Tree (root A; B and C are the children of A, D and E are the
children of B, F and G are the children of C, and H and I are the children of D)

Binary trees are easy to implement because they have a small, fixed number of child
links. Because of this characteristic, binary trees are the most common types of trees
and form the basis of many important data structures.

Tree Terminology:

Leaf node

A node with no children is called a leaf (or external node). A node which is not a
leaf is called an internal node.

Path
A sequence of nodes n1, n2, . . ., nk, such that ni is the parent of ni + 1 for i = 1,
2,. . ., k - 1. The length of a path is 1 less than the number of nodes on the
path. Thus there is a path of length zero from a node to itself.

For the tree shown in figure 5.2.1, the path between A and I is A, B, D, I.

Siblings

The children of the same parent are called siblings.

For the tree shown in figure 5.2.1, F and G are the siblings of the parent node C
and H and I are the siblings of the parent node D.

Ancestor and Descendent

If there is a path from node A to node B, then A is called an ancestor of B and


B is called a descendent of A.

Subtree

Any node of a tree, with all of its descendants is a subtree.


Level
The level of a node refers to its distance from the root. The root of the tree
has level 0, and the level of any other node in the tree is one more than the
level of its parent. For example, in the binary tree of Figure 5.2.1 node F is at
level 2 and node H is at level 3. The maximum number of nodes at any level n is
2^n.

Height

The maximum level in a tree determines its height. The height of a node in a
tree is the length of a longest path from the node to a leaf. The term depth is
also used to denote height of the tree. The height of the tree of Figure 5.2.1 is
3.

Depth
The depth of a node is the number of nodes along the path from the root to that

Assigning level numbers and Numbering of nodes for a binary tree:

The nodes of a binary tree can be numbered in a natural way, level by level, left
to right. The nodes of a complete binary tree can be numbered so that the root
is assigned the number 1, a left child is assigned twice the number assigned its
parent, and a right child is assigned one more than twice the number assigned
its parent. For example, see Figure 5.2.2.

Figure 5.2.2. Level by level numbering of a binary tree (the root is numbered 1,
and node i has children 2i and 2i + 1)

Properties of binary trees:

Some of the important properties of a binary tree are as follows:

1. If h = height of a binary tree, then

   a. Maximum number of leaves = 2^h

   b. Maximum number of nodes = 2^(h+1) - 1

2. If a binary tree contains m nodes at level l, it contains at most 2m nodes at level
   l + 1.

3. Since a binary tree can contain at most one node at level 0 (the root), it can
   contain at most 2^l nodes at level l.

4. The total number of edges in a full binary tree with n nodes is n - 1.


Strictly Binary tree:

If every non-leaf node in a binary tree has nonempty left and right subtrees, the
tree is termed as strictly binary tree. Thus the tree of figure 5.2.3(a) is strictly
binary. A strictly binary tree with n leaves always contains 2n - 1 nodes.

Full Binary tree:

A full binary tree of height h has all its leaves at level h. Alternatively; All non
leaf nodes of a full binary tree have two children, and the leaf nodes have no
children.

A full binary tree with height h has 2^(h+1) - 1 nodes. A full binary tree of height h
is a strictly binary tree all of whose leaves are at level h. Figure 5.2.3(d)
illustrates the full binary tree containing 15 nodes and of height 3.

A full binary tree of height h contains 2^h leaves and 2^h - 1 non-leaf nodes, so in
total it contains

    2^0 + 2^1 + . . . + 2^h = 2^(h+1) - 1 nodes.

For example, a full binary tree of height 3 contains 2^(3+1) - 1 = 15 nodes.

Figure 5.2.3. Examples of binary trees: (a) strictly binary tree, (b) strictly complete
binary tree, (c) complete binary tree, (d) full binary tree of height 3

Complete Binary tree:

A binary tree with n nodes is said to be complete if it contains all the first n
nodes of the above numbering scheme. Figure 5.2.4 shows examples of
complete and incomplete binary trees.

A complete binary tree of height h looks like a full binary tree down to level h-1,
and the level h is filled from left to right.
A complete binary tree with n leaves that is not strictly binary has 2n nodes. For
example, the tree of Figure 5.2.3(c) is a complete binary tree having 5 leaves
and 10 nodes.

Figure 5.2.4. Examples of complete and incomplete binary trees

Internal and external nodes:

We define two terms: internal nodes and external nodes. An internal node is a tree
node having at least one key and possibly some children. It is sometimes convenient to
have another type of node, called an external node, and pretend that all null child
links point to such a node. An external node does not actually exist, but serves as a
conceptual place holder for nodes to be inserted.

We draw internal nodes using circles, with letters as labels. External nodes are denoted
by squares. The square node version is sometimes called an extended binary tree. A
binary tree with n internal nodes has n+1 external nodes. Figure 5.2.6 shows a sample
tree illustrating both internal and external nodes.

Internal nodes: a, b, c, d (drawn as circles); external nodes are drawn as squares.

Figure 5.2.6. Internal and external nodes

Data Structures for Binary Trees:

1. Arrays; especially suited for complete and full binary trees.


2. Pointer-based.

Array-based Implementation:

Binary trees can also be stored in arrays, and if the tree is a complete binary tree, this
method wastes no space. In this compact arrangement, if a node has an index i, its
children are found at indices 2i+1 and 2i+2, while its parent (if any) is found at index
floor((i-1)/2) (assuming the root of the tree stored in the array at an index zero).
This method benefits from more compact storage and better locality of reference,
particularly during a preorder traversal. However, it requires contiguous memory, is
expensive to grow, and wastes space proportional to 2^h - n for a tree of height h with
n nodes.
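
As a small illustration of the index arithmetic described above (a sketch, not part of
the original text; the helper names and the sample values are assumptions):

# include <stdio.h>

/* For a complete binary tree stored in an array with the root at index 0. */
int left_child(int i)  { return 2*i + 1; }
int right_child(int i) { return 2*i + 2; }
int parent(int i)      { return (i - 1) / 2; }   /* floor((i-1)/2) for i > 0 */

void main()
{
    /* Level-order storage of the 7-node tree numbered 1..7; index 0 holds the root. */
    int tree[7] = {1, 2, 3, 4, 5, 6, 7};
    int i = 1;                                   /* node 2, stored at index 1 */
    printf("node %d: left child %d, right child %d, parent %d\n",
           tree[i], tree[left_child(i)], tree[right_child(i)], tree[parent(i)]);
    /* prints: node 2: left child 4, right child 5, parent 1 */
}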

Linked Representation (Pointer based):

Array representation is good for complete binary tree, but it is wasteful for many other
binary trees. The representation suffers from insertion and deletion of node from the
middle of the tree, as it requires the moment of potentially many nodes to reflect the
change in level number of this node. To overcome this difficulty we represent the
binary tree in linked representation.
In linked representation each node in a binary tree has three fields: the left child
field denoted as LeftChild, the data field denoted as data, and the right child field
denoted as RightChild. If any subtree is empty, the corresponding pointer (LeftChild or
RightChild) will store a NULL value. If the tree itself is empty, the root pointer will
store a NULL value.

The advantage of using linked representation of binary tree is that:

Insertion and deletion involve no data movement and no movement of nodes


except the rearrangement of pointers.

The disadvantages of linked representation of binary tree includes:

Given a node structure, it is difficult to determine its parent node.

Memory spaces are wasted for storing NULL pointers for the nodes, which
have no subtrees.

The structure definition, node representation and empty binary tree are shown in figure
5.2.6, and the linked representation of a binary tree using this node structure is given in
figure 5.2.7.

struct binarytree
{
    struct binarytree *LeftChild;
    int data;
    struct binarytree *RightChild;
};

typedef struct binarytree node;

node *root = NULL;          /* empty tree: root stores NULL */

Node representation:  [ LeftChild | data | RightChild ]

Figure 5.2.6. Structure definition, node representation and empty tree
[Figure 5.2.7. Linked representation for the binary tree: each node holds pointers to its left and right children, with NULL (shown as X) for missing children]

5.3. Binary Tree Traversal Techniques:

A tree traversal is a method of visiting every node in the tree. By visit, we mean that
some type of operation is performed. For example, you may wish to print the contents
of the nodes.

There are four common ways to traverse a binary tree:

1. Preorder
2. Inorder
3. Postorder
4. Level order

In the first three traversal methods, the left subtree of a node is traversed before the
right subtree. The difference among them comes from the difference in the time at
which a root node is visited.

5.3.1. Recursive Traversal Algorithms:

Inorder Traversal:

In the case of inorder traversal, the root of each subtree is visited after its left subtree
has been traversed but before the traversal of its right subtree begins. The steps for
traversing a binary tree in inorder traversal are:

1. Visit the left subtree, using inorder.


2. Visit the root.
3. Visit the right subtree, using inorder.

The algorithm for inorder traversal is as follows:

void inorder(node *root)


{
if(root != NULL)
{
inorder(root->lchild);
print root -> data;
inorder(root->rchild);
}
}

Preorder Traversal:

In a preorder traversal, each root node is visited before its left and right subtrees are
traversed. Preorder search is also called backtracking. The steps for traversing a binary
tree in preorder traversal are:

1. Visit the root.


2. Visit the left subtree, using preorder.
3. Visit the right subtree, using preorder.

The algorithm for preorder traversal is as follows:

void preorder(node *root)


{
if( root != NULL )
{
print root -> data;
preorder (root -> lchild);
preorder (root -> rchild);
}
}

Postorder Traversal:

In a postorder traversal, each root is visited after its left and right subtrees have been
traversed. The steps for traversing a binary tree in postorder traversal are:

1. Visit the left subtree, using postorder.


2. Visit the right subtree, using postorder
3. Visit the root.

The algorithm for postorder traversal is as follows:

void postorder(node *root)


{
if( root != NULL )
{
postorder (root -> lchild);
postorder (root -> rchild);
print (root -> data);
}
}

Level order Traversal:

In a level order traversal, the nodes are visited level by level starting from the root,
and going from left to right. The level order traversal requires a queue data structure.
So, it is not possible to develop a recursive procedure to traverse the binary tree in
level order. This is nothing but a breadth first search technique.
The algorithm for level order traversal is as follows:

void levelorder()
{
int j;
for(j = 0; j < ctr; j++)
{
if(tree[j] != NULL)
print tree[j] -> data;
}
}
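The routine above relies on the array tree[] built for a complete binary tree (see Section
5.3.3). For a pointer-based tree, level order traversal can be done with an explicit queue;
the following is a minimal sketch under that assumption (the node type and the fixed queue
capacity shown here are illustrative, not from the text):

#include <stdio.h>

/* Minimal node type assumed for this sketch, matching the linked
   representation used in this chapter. */
typedef struct tnode {
    struct tnode *lchild;
    int data;
    struct tnode *rchild;
} tnode;

/* Level order traversal with a simple array-based queue. */
void levelorder_ptr(tnode *root)
{
    tnode *queue[100];                      /* illustrative fixed capacity */
    int front = 0, rear = 0;

    if (root == NULL)
        return;
    queue[rear++] = root;                   /* enqueue the root */
    while (front < rear)
    {
        tnode *cur = queue[front++];        /* dequeue */
        printf("%d ", cur->data);
        if (cur->lchild != NULL)
            queue[rear++] = cur->lchild;    /* enqueue left child */
        if (cur->rchild != NULL)
            queue[rear++] = cur->rchild;    /* enqueue right child */
    }
}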

Example 1:

Traverse the following binary tree in pre, post, inorder and level order.

[Binary tree for Example 1]

Preorder traversal yields:    A, B, D, C, E, G, F, H, I
Postorder traversal yields:   D, B, G, E, H, I, F, C, A
Inorder traversal yields:     D, B, A, E, G, C, H, F, I
Level order traversal yields: A, B, C, D, E, F, G, H, I

Example 2:

Traverse the following binary tree in pre, post, inorder and level order.

[Binary tree for Example 2]

Preorder traversal yields:    P, F, B, H, G, S, R, Y, T, W, Z
Postorder traversal yields:   B, G, H, F, R, W, T, Z, Y, S, P
Inorder traversal yields:     B, F, G, H, P, R, S, T, W, Y, Z
Level order traversal yields: P, F, S, B, H, R, Y, G, T, Z, W


Example 3:

Traverse the following binary tree in pre, post, inorder and level order.

[Binary tree for Example 3]

Preorder traversal yields:    2, 7, 2, 6, 5, 11, 5, 9, 4
Postorder traversal yields:   2, 5, 11, 6, 7, 4, 9, 5, 2
Inorder traversal yields:     2, 7, 5, 6, 11, 2, 5, 4, 9
Level order traversal yields: 2, 7, 5, 2, 6, 9, 5, 11, 4

Example 4:

Traverse the following binary tree in pre, post, inorder and level order.

[Binary tree for Example 4]

Preorder traversal yields:    A, B, D, G, K, H, L, M, C, E
Postorder traversal yields:   K, G, L, M, H, D, B, E, C, A
Inorder traversal yields:     K, G, D, L, H, M, B, A, E, C
Level order traversal yields: A, B, C, D, E, G, H, K, L, M

5.3.2. Building Binary Tree from Traversal Pairs:

Sometimes it is required to construct a binary tree when its traversals are known. From a
single traversal it is not possible to construct a unique binary tree. However, if any of the
following pairs of traversals is given, then the corresponding tree can be drawn uniquely:

Inorder and preorder


Inorder and postorder
Inorder and level order

The basic principle for formulation is as follows:

If the preorder traversal is given, then the first node is the root node. If the postorder
traversal is given then the last node is the root node. Once the root node is identified,
all the nodes in the left sub-trees and right sub-trees of the root node can be identified
using inorder.

Same technique can be applied repeatedly to form sub-trees.


It can be noted that, for the purpose mentioned, two traversals are essential, out of
which one should be the inorder traversal and the other either preorder or postorder;
given only the preorder and postorder traversals, the binary tree cannot be obtained uniquely.
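The principle just described can be sketched in C. The following illustrative program (the
node layout and names are assumptions, not code from the text) builds a binary tree from
its preorder and inorder sequences and reprints the inorder sequence as a check:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative node type for this sketch. */
typedef struct bnode {
    char data;
    struct bnode *left, *right;
} bnode;

/* pre[] and in[] hold the preorder and inorder sequences of the same n
   distinct keys. The first preorder symbol is the root; its position in the
   inorder sequence splits the remaining keys into left and right subtrees,
   exactly as in the worked examples that follow. */
bnode *build(const char *pre, const char *in, int n)
{
    if (n <= 0)
        return NULL;

    bnode *root = malloc(sizeof *root);
    root->data = pre[0];

    int k = 0;                        /* position of the root in inorder */
    while (in[k] != pre[0])
        k++;

    root->left  = build(pre + 1,     in,         k);          /* k keys     */
    root->right = build(pre + 1 + k, in + k + 1, n - k - 1);  /* the rest   */
    return root;
}

void print_inorder(bnode *t)
{
    if (t) { print_inorder(t->left); printf("%c ", t->data); print_inorder(t->right); }
}

int main(void)
{
    const char *pre = "ABDGCEHIF";    /* sequences of Example 1 below */
    const char *in  = "DGBAHEICF";
    bnode *root = build(pre, in, 9);
    print_inorder(root);              /* reprints: D G B A H E I C F  */
    printf("\n");
    return 0;
}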

Example 1:

Construct a binary tree from a given preorder and inorder sequence:

Preorder: A B D G C E H I F
Inorder: D G B A H E I C F

Solution:

From Preorder sequence A B D G C E H I F, the root is: A

From Inorder sequence D G B A H E I C F, we get the left and right sub trees:

Left sub tree is: D G B

Right sub tree is: H E I C F

The Binary tree upto this point looks like:

DGB HEICF

To find the root, left and right sub trees for D G B:

From the preorder sequence B D G, the root of tree is: B

From the inorder sequence D G B, we can find that D and G are to the left of B.

The Binary tree upto this point looks like:

B HEICF

DG

To find the root, left and right sub trees for D G:

From the preorder sequence D G, the root of the tree is: D

From the inorder sequence D G, we can find that there is no left node to D and G is at
the right of D.
The Binary tree upto this point looks like:

B HEICF

To find the root, left and right sub trees for H E I C F:

From the preorder sequence C E H I F, the root of this sub tree is: C

From the inorder sequence H E I C F, we can find that H E I are at the left of C and F is
at the right of C.

The Binary tree upto this point looks like:


A

B C

D F
HEI

To find the root, left and right sub trees for H E I:

From the preorder sequence E H I, the root of the tree is: E

From the inorder sequence H E I, we can find that H is at the left of E and I is at the
right of E.

The Binary tree upto this point looks like:

B C

D E F

G H I

Example 2:

Construct a binary tree from a given postorder and inorder sequence:

Inorder: D G B A H E I C F
Postorder: G D B H I E F C A
Solution:

From Postorder sequence G D B H I E F C A, the root is: A

From Inorder sequence D G B A H E I C F, we get the left and right sub trees:

Left sub tree is: D G B


Right sub tree is: H E I C F

The Binary tree upto this point looks like:

DGB HEICF

To find the root, left and right sub trees for D G B:

From the postorder sequence G D B, the root of tree is: B

From the inorder sequence D G B, we can find that D G are to the left of B and there is
no right subtree for B.

The Binary tree upto this point looks like:

B HEICF

DG

To find the root, left and right sub trees for D G:

From the postorder sequence G D, the root of the tree is: D

From the inorder sequence D G, we can find that there is no left subtree for D and G is to the
right of D.

The Binary tree upto this point looks like:

B HEICF

To find the root, left and right sub trees for H E I C F:

From the postorder sequence H I E F C, the root of this sub tree is: C

From the inorder sequence H E I C F, we can find that H E I are to the left of C and F is
the right subtree for C.
The Binary tree upto this point looks like:

B C

D HEI F

To find the root, left and right sub trees for H E I:

From the postorder sequence H I E, the root of the tree is: E

From the inorder sequence H E I, we can find that H is left subtree for E and I is to the
right of E.

The Binary tree upto this point looks like:

B C

D E F

G H I

Example 3:

Construct a binary tree from a given preorder and inorder sequence:

Inorder: n1 n2 n3 n4 n5 n6 n7 n8 n9
Preorder: n6 n2 n1 n4 n3 n5 n9 n7 n8

Solution:

From Preorder sequence n6 n2 n1 n4 n3 n5 n9 n7 n8, the root is: n6

From Inorder sequence n1 n2 n3 n4 n5 n6 n7 n8 n9, we get the left and right sub
trees:

Left sub tree is: n1 n2 n3 n4 n5

Right sub tree is: n7 n8 n9

The Binary tree upto this point looks like:

n6

n1 n2 n3 n4 n5 n7 n8 n9
To find the root, left and right sub trees for n1 n2 n3 n4 n5:

From the preorder sequence n2 n1 n4 n3 n5, the root of tree is: n2

From the inorder sequence n1 n2 n3 n4 n5, we can find that n1 is to the left of n2 and
n3 n4 n5 are to the right of n2. The Binary tree upto this point looks like:

n6

n2 n7 n8 n9

n1 n3 n4 n5

To find the root, left and right sub trees for n3 n4 n5:

From the preorder sequence n4 n3 n5, the root of the tree is: n4

From the inorder sequence n3 n4 n5, we can find that n3 is to the left of n4 and n5 is
at the right of n4.

The Binary tree upto this point looks like:

n6

n2 n7 n8 n9

n1 n4

n3 n5

To find the root, left and right sub trees for n7 n8 n9:

From the preorder sequence n9 n7 n8, the root of this sub tree is: n9

From the inorder sequence n7 n8 n9, we can find that n7 and n8 are at the left of n9
and no right subtree of n9.

The Binary tree upto this point looks like:

n6

n9
n2

n1 n4 n7 n8

n3 n5

To find the root, left and right sub trees for n7 n8:

From the preorder sequence n7 n8, the root of the tree is: n7
From the inorder sequence n7 n8, we can find that there is no left subtree for n7 and n8 is at
the right of n7.

The Binary tree upto this point looks like:

n6

n2 n9

n1 n4 n7

n3 n5 n8

Example 4:

Construct a binary tree from a given postorder and inorder sequence:

Inorder: n1 n2 n3 n4 n5 n6 n7 n8 n9
Postorder: n1 n3 n5 n4 n2 n8 n7 n9 n6

Solution:

From Postorder sequence n1 n3 n5 n4 n2 n8 n7 n9 n6, the root is: n6

From Inorder sequence n1 n2 n3 n4 n5 n6 n7 n8 n9, we get the left and right sub
trees:

Left sub tree is: n1 n2 n3 n4 n5


Right sub tree is: n7 n8 n9

The Binary tree upto this point looks like:

n6

n1 n2 n3 n4 n5

To find the root, left and right sub trees for n1 n2 n3 n4 n5:

From the postorder sequence n1 n3 n5 n4 n2, the root of tree is: n2

From the inorder sequence n1 n2 n3 n4 n5, we can find that n1 is to the left of n2 and
n3 n4 n5 are to the right of n2.

The Binary tree upto this point looks like:

n6

n2

n1
To find the root, left and right sub trees for n3 n4 n5:

From the postorder sequence n3 n5 n4, the root of the tree is: n4

From the inorder sequence n3 n4 n5, we can find that n3 is to the left of n4 and n5 is
to the right of n4. The Binary tree upto this point looks like:

n6

n2 n7 n8 n9

n1 n4

n3 n5

To find the root, left and right sub trees for n7 n8 and n9:

From the postorder sequence n8 n7 n9, the root of this sub tree is: n9

From the inorder sequence n7 n8 n9, we can find that n7 and n8 are to the left of n9
and no right subtree for n9.

The Binary tree upto this point looks like:

n6

n2 n9

n1 n4 n7 n8

n3 n5

To find the root, left and right sub trees for n7 and n8:

From the postorder sequence n8 n7, the root of the tree is: n7

From the inorder sequence n7 n8, we can find that there is no left subtree for n7 and
n8 is to the right of n7. The Binary tree upto this point looks like:

n6

n2 n9

n1 n4 n7

n3 n5 n8
5.3.3. Binary Tree Creation and Traversal Using Arrays:

This program performs the following operations:

1. Creates a complete Binary Tree


2. Inorder traversal
3. Preorder traversal
4. Postorder traversal
5. Level order traversal
6. Prints leaf nodes
7. Finds height of the tree created

# include <stdio.h>
# include <stdlib.h>

# define max(a, b) (((a) > (b)) ? (a) : (b))    /* used by height() below */

struct tree
{
struct tree* lchild;
char data[10];
struct tree* rchild;
};

typedef struct tree node;


int ctr;
node *tree[100];

node* getnode()
{
node *temp ;
temp = (node*) malloc(sizeof(node));
printf("\n Enter Data: ");
scanf("%s",temp->data);
temp->lchild = NULL;
temp->rchild = NULL;
return temp;
}

void create_fbinarytree()
{
int j, i=0;
printf("\n How many nodes you want: ");
scanf("%d",&ctr);
tree[0] = getnode();
j = ctr;
j--;
do
{
if( j > 0 ) /* left child */
{
tree[ i * 2 + 1 ] = getnode();
tree[i]->lchild = tree[i * 2 + 1];
j--;
}
if( j > 0 ) /* right child */
{
tree[i * 2 + 2] = getnode();
j--;
tree[i]->rchild = tree[i * 2 + 2];
}
i++;
} while( j > 0);
}
void inorder(node *root)
{
if( root != NULL )
{
inorder(root->lchild);
printf("%3s",root->data);
inorder(root->rchild);
}
}

void preorder(node *root)


{
if( root != NULL )
{
printf("%3s",root->data);
preorder(root->lchild);
preorder(root->rchild);
}
}

void postorder(node *root)


{
if( root != NULL )
{
postorder(root->lchild);
postorder(root->rchild);
printf("%3s",root->data);
}
}

void levelorder()
{
int j;
for(j = 0; j < ctr; j++)
{
if(tree[j] != NULL)
printf("%3s",tree[j]->data);
}
}

void print_leaf(node *root)


{
if(root != NULL)
{
if(root->lchild == NULL && root->rchild == NULL)
printf("%3s ",root->data);
print_leaf(root->lchild);
print_leaf(root->rchild);
}
}

int height(node *root)


{
if(root == NULL)
{
return 0;
}
if(root->lchild == NULL && root->rchild == NULL)
return 0;
else
return (1 + max(height(root->lchild), height(root->rchild)));
}

void main()
{
int i;
create_fbinarytree();
printf("\n Inorder Traversal: ");
inorder(tree[0]);
printf("\n Preorder Traversal: ");
preorder(tree[0]);
printf("\n Postorder Traversal: ");
postorder(tree[0]);
printf("\n Level Order Traversal: ");
levelorder();
printf("\n Leaf Nodes: ");
print_leaf(tree[0]);
printf("\n Height of Tree: %d ", height(tree[0]));
}

5.3.4. Binary Tree Creation and Traversal Using Pointers:

This program performs the following operations:

1. Creates a complete Binary Tree


2. Inorder traversal
3. Preorder traversal
4. Postorder traversal
5. Level order traversal
6. Prints leaf nodes
7. Finds height of the tree created
8. Deletes last node
9. Finds height of the tree created

# include <stdio.h>
# include <stdlib.h>
# include <conio.h>                             /* clrscr(), getch() */

# define max(a, b) (((a) > (b)) ? (a) : (b))    /* used by height() below */

struct tree
{
struct tree* lchild;
char data[10];
struct tree* rchild;
};

typedef struct tree node;


node *Q[50];
int node_ctr;

node* getnode()
{
node *temp ;
temp = (node*) malloc(sizeof(node));
printf("\n Enter Data: ");
fflush(stdin);
scanf("%s",temp->data);
temp->lchild = NULL;
temp->rchild = NULL;
return temp;
}
void create_binarytree(node *root)
{
char option;
node_ctr = 1;
if( root != NULL )
{
printf("\n Node %s has Left SubTree(Y/N)",root->data);
fflush(stdin);
scanf("%c",&option);
if( option=='Y' || option == 'y')
{
root->lchild = getnode();
node_ctr++;
create_binarytree(root->lchild);
}
else
{
root->lchild = NULL;
create_binarytree(root->lchild);
}

printf("\n Node %s has Right SubTree(Y/N) ",root->data);


fflush(stdin);
scanf("%c",&option);
if( option=='Y' || option == 'y')
{
root->rchild = getnode();
node_ctr++;
create_binarytree(root->rchild);
}
else
{
root->rchild = NULL;
create_binarytree(root->rchild);
}
}
}

void make_Queue(node *root,int parent)


{
if(root != NULL)
{
node_ctr++;
Q[parent] = root;
make_Queue(root->lchild,parent*2+1);
make_Queue(root->rchild,parent*2+2);
}
}

void delete_node(node *root, int parent)


{
int index = 0;
if(root == NULL)
printf("\n Empty TREE ");
else
{
node_ctr = 0;
make_Queue(root,0);
index = node_ctr-1;
Q[index] = NULL;
parent = (index-1) /2;
if( 2* parent + 1 == index )
Q[parent]->lchild = NULL;
else
Q[parent]->rchild = NULL;
}
printf("\n Node Deleted ..");
}

void inorder(node *root)


{
if(root != NULL)
{
inorder(root->lchild);
printf("%3s",root->data);
inorder(root->rchild);
}
}

void preorder(node *root)


{
if( root != NULL )
{
printf("%3s",root->data);
preorder(root->lchild);
preorder(root->rchild);
}
}

void postorder(node *root)


{
if( root != NULL )
{
postorder(root->lchild);
postorder(root->rchild);
printf("%3s", root->data);
}
}

void print_leaf(node *root)


{
if(root != NULL)
{
if(root->lchild == NULL && root->rchild == NULL)
printf("%3s ",root->data);
print_leaf(root->lchild);
print_leaf(root->rchild);
}
}

int height(node *root)


{
if(root == NULL)
return -1;
else
return (1 + max(height(root->lchild), height(root->rchild)));
}

void print_tree(node *root, int line)


{
int i;
if(root != NULL)
{
print_tree(root->rchild,line+1);
printf("\n");
for(i=0;i<line;i++)
printf(" ");
printf("%s", root->data);
print_tree(root->lchild,line+1);
}
}

void level_order(node *Q[],int ctr)


{
int i;
for( i = 0; i < ctr ; i++)
{
if( Q[i] != NULL )
printf("%5s",Q[i]->data);
}
}

int menu()
{
int ch;
clrscr();
printf("\n 1. Create Binary Tree ");
printf("\n 2. Inorder Traversal ");
printf("\n 3. Preorder Traversal ");
printf("\n 4. Postorder Traversal ");
printf("\n 5. Level Order Traversal");
printf("\n 6. Leaf Node ");
printf("\n 7. Print Height of Tree ");
printf("\n 8. Print Binary Tree ");
printf("\n 9. Delete a node ");
printf("\n 10. Quit ");
printf("\n Enter Your choice: ");
scanf("%d", &ch);
return ch;
}

void main()
{
int i,ch;
node *root = NULL;
do
{
ch = menu();
switch( ch)
{
case 1 :
if( root == NULL )
{
root = getnode();
create_binarytree(root);
}
else
{
printf("\n Tree is already Created ..");
}
break;
case 2 :
printf("\n Inorder Traversal: ");
inorder(root);
break;
case 3 :
printf("\n Preorder Traversal: ");
preorder(root);
break;
case 4 :
printf("\n Postorder Traversal: ");
postorder(root);
break;
case 5:
printf("\n Level Order Traversal ..");
node_ctr = 0;      /* reset the counter before rebuilding the queue */
make_Queue(root,0);
level_order(Q,node_ctr);
break;
case 6 :
printf("\n Leaf Nodes: ");
print_leaf(root);
break;
case 7 :
printf("\n Height of Tree: %d ", height(root));
break;
case 8 :
printf("\n Print Tree \n");
print_tree(root, 0);
break;
case 9 :
delete_node(root,0);
break;
case 10 :
exit(0);
}
getch();
}while(1);
}

5.3.5. Non Recursive Traversal Algorithms:

At first glance, it appears that we would always want to use the flat traversal functions
since they use less stack space. But the flat versions are not necessarily better. For
instance, some overhead is associated with the use of an explicit stack, which may
negate the savings we gain from storing only node pointers. Use of the implicit function
call stack may actually be faster due to special machine instructions that can be used.

Inorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path rooted at vertex, pushing each vertex onto the
stack and stop when there is no left son of vertex.

2. Pop and process the nodes on stack if zero is popped then exit. If a vertex with
right son exists, then set right son of vertex as current vertex and return to
step one.

Algorithm inorder()
{
    stack[1] = 0
    vertex = root
top:
    while(vertex != 0)
    {
        push the vertex into the stack
        vertex = leftson(vertex)
    }
    pop the element from the stack and make it as vertex
    while(vertex != 0)
    {
        print the vertex node
        if(rightson(vertex) != 0)
        {
            vertex = rightson(vertex)
            goto top
        }
        pop the element from the stack and make it as vertex
    }
}
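An equivalent C routine is sketched below. It keeps an explicit stack of node pointers and,
instead of pushing a zero sentinel, simply tests whether the stack is empty. It assumes the
node type (lchild, data, rchild fields and char data) used by the programs of Sections 5.3.3
and 5.3.4, into which it can be dropped; the stack capacity is illustrative.

/* Iterative inorder traversal with an explicit stack of node pointers. */
void inorder_iterative(node *root)
{
    node *stack[100];                       /* illustrative fixed capacity */
    int top = -1;
    node *vertex = root;

    while (vertex != NULL || top >= 0)
    {
        while (vertex != NULL)              /* go down the left-most path */
        {
            stack[++top] = vertex;
            vertex = vertex->lchild;
        }
        vertex = stack[top--];              /* pop and visit the node     */
        printf("%3s", vertex->data);
        vertex = vertex->rchild;            /* then traverse its right subtree */
    }
}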

Preorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path by pushing the right son of vertex onto stack, if
   any, and process each vertex. The traversing ends after a vertex with no left
   child exists.

2. Pop the element from the stack; if zero is popped then exit, otherwise make the
   popped element the current vertex and return to step one.

Algorithm preorder( )
{
    stack[1] = 0
    vertex = root
    while(vertex != 0)
    {
        print vertex node
        if(rightson(vertex) != 0)
            push the right son of vertex into the stack
        if(leftson(vertex) != 0)
            vertex = leftson(vertex)
        else
            pop the element from the stack and make it as vertex
    }
}

Postorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path rooted at vertex. At each vertex of path push
vertex on to stack and if vertex has a right son push (right son of vertex) onto
stack.

2. Pop and process the positive nodes (left nodes). If zero is popped then exit. If a
negative node is popped, then ignore the sign and return to step one.
Algorithm postorder( )
{
    stack[1] = 0
    vertex = root
top:
    while(vertex != 0)
    {
        push vertex onto stack
        if(rightson(vertex) != 0)
            push -(rightson(vertex)) onto stack
        vertex = leftson(vertex)
    }
    pop from stack and make it as vertex
    while(vertex > 0)
    {
        print the vertex node
        pop from stack and make it as vertex
    }
    if(vertex < 0)
    {
        vertex = -(vertex)
        goto top
    }
}

Example 1:

Traverse the following binary tree in pre, post and inorder using non-recursive
traversing algorithm.

[Binary tree for Example 1]

Preorder traversal yields:  A, B, D, G, K, H, L, M, C, E
Postorder traversal yields: K, G, L, M, H, D, B, E, C, A
Inorder traversal yields:   K, G, D, L, H, M, B, A, E, C

Inorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path rooted at vertex, pushing each vertex onto the
stack and stop when there is no left son of vertex.

2. Pop and process the nodes on stack if zero is popped then exit. If a vertex with
right son exists, then set right son of vertex as current vertex and return to step
one.
CURRENT VERTEX    STACK              PROCESSED NODES      REMARKS
A 0 PUSH 0

0ABDGK PUSH the left most path of A

K 0ABDG K POP K

G 0ABD KG POP G since K has no right son

D 0AB KGD POP D since G has no right son

H 0AB KGD Make the right son of D as vertex

0ABHL KGD PUSH the leftmost path of H

L 0ABH KGDL POP L

H 0AB KGDLH POP H since L has no right son

M 0AB KGDLH Make the right son of H as vertex

0ABM KGDLH PUSH the left most path of M

M 0AB KGDLHM POP M

B 0A KGDLHMB POP B since M has no right son

A 0 KGDLHMBA Make the right son of A as vertex

C 0CE KGDLHMBA PUSH the left most path of C

E 0C KGDLHMBAE POP E

C 0 KGDLHMBAEC Stop since stack is empty

Postorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path rooted at vertex. At each vertex of path push
vertex on to stack and if vertex has a right son push (right son of vertex) onto
stack.

2. Pop and process the positive nodes (left nodes). If zero is popped then exit. If a
negative node is popped, then ignore the sign and return to step one.

CURRENT VERTEX    STACK                    PROCESSED NODES        REMARKS
A                 0                                               PUSH 0
                  0 A -C B D -H G K                               PUSH the left most path of A with a -ve for right sons
                  0 A -C B D -H          K G                      POP all +ve nodes K and G
H                 0 A -C B D             K G                      Pop H
                  0 A -C B D H -M L      K G                      PUSH the left most path of H with a -ve for right sons
L                 0 A -C B D H -M        K G L                    POP all +ve nodes L
M                 0 A -C B D H           K G L                    Pop M
                  0 A -C B D H M         K G L                    PUSH the left most path of M with a -ve for right sons
                  0 A -C                 K G L M H D B            POP all +ve nodes M, H, D and B
C                 0 A                    K G L M H D B            Pop C
                  0 A C E                K G L M H D B            PUSH the left most path of C with a -ve for right sons
                  0                      K G L M H D B E C A      POP all +ve nodes E, C and A
                  0                      K G L M H D B E C A      Stop since stack is empty

Preorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path by pushing the right son of vertex onto stack, if
any and process each vertex. The traversing ends after a vertex with no left
child exists.

2.

CURRENT VERTEX    STACK      PROCESSED NODES      REMARKS
A 0 PUSH 0
PUSH the right son of each vertex onto stack and
0CH ABDGK
process each vertex in the left most path
H 0C ABDGK POP H
PUSH the right son of each vertex onto stack and
0CM ABDGKHL
process each vertex in the left most path
M 0C ABDGKHL POP M
PUSH the right son of each vertex onto stack and
0C ABDGKHLM process each vertex in the left most path; M has
no left path
C 0 ABDGKHLM Pop C
PUSH the right son of each vertex onto stack and
0 ABDGKHLMCE process each vertex in the left most path; C has
no right son on the left most path
0 ABDGKHLMCE Stop since stack is empty
Example 2:

Traverse the following binary tree in pre, post and inorder using non-recursive
traversing algorithm.

[Binary tree for Example 2]

Preorder traversal yields:  2, 7, 2, 6, 5, 11, 5, 9, 4
Postorder traversal yields: 2, 5, 11, 6, 7, 4, 9, 5, 2
Inorder traversal yields:   2, 7, 5, 6, 11, 2, 5, 4, 9

Inorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path rooted at vertex, pushing each vertex onto the
stack and stop when there is no left son of vertex.

2. Pop and process the nodes on stack if zero is popped then exit. If a vertex with
right son exists, then set right son of vertex as current vertex and return to step
one.

CURRENT VERTEX STACK PROCESSED NODES REMARKS

2                 0
                  0 2 7 2
2                 0 2 7              2
7                 0 2                2 7
6                 0 2 6 5            2 7
5                 0 2 6              2 7 5
6                 0 2                2 7 5 6
11                0 2 11             2 7 5 6
11                0 2                2 7 5 6 11
2                 0                  2 7 5 6 11 2
5                 0 5                2 7 5 6 11 2
5                 0                  2 7 5 6 11 2 5
9                 0 9 4              2 7 5 6 11 2 5
4                 0 9                2 7 5 6 11 2 5 4
9                 0                  2 7 5 6 11 2 5 4 9    Stop since stack is empty


Postorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path rooted at vertex. At each vertex of path push
vertex on to stack and if vertex has a right son push (right son of vertex) onto
stack.

2. Pop and process the positive nodes (left nodes). If zero is popped then exit. If a
negative node is popped, then ignore the sign and return to step one.

CURRENT VERTEX STACK PROCESSED NODES REMARKS


2                 0
                  0 2 -5 7 -6 2
2                 0 2 -5 7 -6        2
6                 0 2 -5 7           2
                  0 2 -5 7 6 -11 5   2
5                 0 2 -5 7 6 -11     2 5
11                0 2 -5 7 6 11      2 5
                  0 2 -5             2 5 11 6 7
5                 0 2 5 -9           2 5 11 6 7
9                 0 2 5 9 4          2 5 11 6 7
                  0                  2 5 11 6 7 4 9 5 2    Stop since stack is empty

Preorder Traversal:

Initially push zero onto stack and then set root as vertex. Then repeat the following
steps until the stack is empty:

1. Proceed down the left most path by pushing the right son of vertex onto stack, if
any and process each vertex. The traversing ends after a vertex with no left
child exists.

2. Pop the vertex from the stack; if zero is popped then exit, otherwise make it the
   current vertex and return to step one.

CURRENT VERTEX STACK PROCESSED NODES REMARKS


2                 0
                  0 5 6              2 7 2
6                 0 5 11             2 7 2 6 5
11                0 5                2 7 2 6 5 11
                  0 5                2 7 2 6 5 11
5                 0 9                2 7 2 6 5 11 5
9                 0                  2 7 2 6 5 11 5 9 4
                  0                  2 7 2 6 5 11 5 9 4    Stop since stack is empty
5.4. Expression Trees:

Expression tree is a binary tree, because all of the operations are binary. It is also
possible for a node to have only one child, as is the case with the unary minus
operator. The leaves of an expression tree are operands, such as constants or variable
names, and the other (non leaf) nodes contain operators.

Once an expression tree is constructed we can traverse it in three ways:

Inorder Traversal
Preorder Traversal
Postorder Traversal

Figure 5.4.1 shows some more expression trees that represent arithmetic expressions
given in infix form.

[Figure 5.4.1 Expression Trees; for example, tree (c) represents ((-a) + (x + y)) / ((+b) * (c * a))]

An expression tree can be generated for the infix and postfix expressions.

An algorithm to convert a postfix expression into an expression tree is as follows:

1. Read the expression one symbol at a time.

2. If the symbol is an operand, we create a one-node tree and push a pointer to


it onto a stack.

3. If the symbol is an operator, we pop pointers to two trees T1 and T2 from


the stack (T1 is popped first) and form a new tree whose root is the operator
and whose left and right children point to T2 and T1 respectively. A pointer
to this new tree is then pushed onto the stack.
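The three steps above can be sketched in C as follows. The node layout and names are
illustrative (not code from the text); operands are assumed to be single characters and
all operators binary.

#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>

typedef struct enode {
    char symbol;
    struct enode *left, *right;
} enode;

enode *newnode(char c)
{
    enode *t = malloc(sizeof *t);
    t->symbol = c;
    t->left = t->right = NULL;
    return t;
}

enode *postfix_to_tree(const char *postfix)
{
    enode *stack[64];                              /* stack of subtree pointers */
    int top = -1;

    for (; *postfix; postfix++)
    {
        if (isalnum((unsigned char)*postfix))
            stack[++top] = newnode(*postfix);      /* operand: one-node tree    */
        else
        {
            enode *t = newnode(*postfix);          /* operator node             */
            t->right = stack[top--];               /* T1 is popped first        */
            t->left  = stack[top--];               /* then T2                   */
            stack[++top] = t;                      /* push pointer to new tree  */
        }
    }
    return stack[top];                             /* pointer to the final tree */
}

void infix(enode *t)                               /* inorder gives infix form  */
{
    if (t) { infix(t->left); printf("%c ", t->symbol); infix(t->right); }
}

int main(void)
{
    enode *root = postfix_to_tree("ab+cde+**");    /* postfix of Example 1 below */
    infix(root);                                   /* prints: a + b * c * d + e  */
    printf("\n");
    return 0;
}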
Example 1:

Construct an expression tree for the postfix expression: a b + c d e + * *

Solution:

The first two symbols are operands, so we create one-node trees and push pointers to
them onto a stack.

[stack: a, b]

Next, a '+' is read, so two pointers to trees are popped, a new tree is formed, and a
pointer to it is pushed onto the stack.

[stack: (a + b), with a and b as children of the '+' node]

Next, c, d, and e are read, and for each a one-node tree is created and a pointer to the
corresponding tree is pushed onto the stack.

[stack: (a + b), c, d, e]

Now a '+' is read, so two trees are merged: d and e become the children of a new '+' node.

[stack: (a + b), c, (d + e)]

Continuing, a '*' is read, so we pop two tree pointers and form a new tree with the '*' as
root: c and (d + e) become its children.

[stack: (a + b), c * (d + e)]

Finally, the last symbol is read, two trees are merged, and a pointer to the final tree is
left on the stack.

[final tree: '*' at the root, with (a + b) as its left subtree and c * (d + e) as its right subtree]

For the above tree:


Inorder form of the expression: a + b * c * d + e
Preorder form of the expression: * + a b * c + d e
Postorder form of the expression: a b + c d e + * *

Example 2:

Construct an expression tree for the arithmetic expression:

(A + B * C) - ((D * E + F) / G)

Solution:

First convert the infix expression into postfix notation. Postfix notation of the arithmetic
expression is: A B C * + D E * F + G / -

The first three symbols are operands, so we create one-node trees and pointers to
three nodes pushed onto the stack.
[stack: A, B, C]

Next, a '*' is read, so B and C are popped, a tree with '*' as root and B, C as children is
formed, and a pointer to it is pushed onto the stack.

[stack: A, (B * C)]

Next, a '+' is read, so A and (B * C) are popped, a tree with '+' as root is formed, and a
pointer to it is pushed onto the stack.

[stack: (A + B * C)]

Next, D and E are read, and for each a one-node tree is created and a pointer to the
corresponding tree is pushed onto the stack.

[stack: (A + B * C), D, E]

Next, a '*' is read, so D and E are popped and a new tree is formed with the '*' as
root.

[stack: (A + B * C), (D * E)]
Proceeding similar to the previous steps, finally, when the last symbol is read, the
expression tree is as follows:

[final tree: '-' at the root; its left subtree is A + (B * C) and its right subtree is ((D * E) + F) / G]

5.4.1. Converting expressions with expression trees:

Let us convert the following expressions from one type to another. These can be as
follows:

1. Postfix to infix
2. Postfix to prefix
3. Prefix to infix
4. Prefix to postfix

1. Postfix to Infix:

The following algorithm works for the expressions whose infix form does not require
parenthesis to override conventional precedence of operators.

A. Create the expression tree from the postfix expression


B. Run inorder traversal on the tree.

2. Postfix to Prefix:

The following algorithm works for the expressions to convert postfix to prefix:

A. Create the expression tree from the postfix expression


B. Run preorder traversal on the tree.

3. Prefix to Infix:

The following algorithm works for the expressions whose infix form does not require
parenthesis to override conventional precedence of operators.

A. Create the expression tree from the prefix expression


B. Run inorder traversal on the tree.
4. Prefix to postfix:

The following algorithm works for the expressions to convert postfix to prefix:

A. Create the expression tree from the prefix expression


B. Run postorder traversal on the tree.

5.5. Threaded Binary Tree:

The linked representation of any binary tree has more null links than actual pointers. If
there are 2n total links, there are n+1 null links. A clever way to make use of these null
links has been devised by A.J. Perlis and C. Thornton.

Their idea is to replace the null links by pointers called Threads to other nodes in the
tree.

If the RCHILD(p) is normally equal to zero, we will replace it by a pointer to the node
which would be printed after P when traversing the tree in inorder.

A null LCHILD link at node P is replaced by a pointer to the node which immediately
precedes node P in inorder. For example, Let us consider the tree:

[A binary tree with root A: B and C are the children of A, D and E the children of B, F and G the children of C, and H and I the children of D]

The Threaded Tree corresponding to the above tree is:

[The same tree with every null link replaced by a thread to the node's inorder predecessor or successor]

The tree has 9 nodes and 10 null links which have been replaced by Threads. If we
traverse T in inorder the nodes will be visited in the order H D I B E A F C G.

In the memory representation, Threads and normal pointers must be distinguished
between; this is done by adding two extra one bit fields LBIT and RBIT.

LBIT(P) = 1 if LCHILD(P) is a normal pointer


LBIT(P) = 0 if LCHILD(P) is a Thread

RBIT(P) = 1 if RCHILD(P) is a normal pointer


RBIT(P) = 0 if RCHILD(P) is a Thread
In the above figure two threads have been left dangling in LCHILD(H) and RCHILD(G).
In order to have no loose Threads we will assume a head node for all threaded binary
trees. The Complete memory representation for the tree is as follows. The tree T is the
left sub-tree of the head node.

[Memory representation of the threaded tree: each node is shown with the fields LBIT, LCHILD, DATA, RCHILD and RBIT; the tree T is the left sub-tree of the head node, and LBIT/RBIT are 0 wherever the corresponding link is a Thread]
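A possible C layout for a threaded tree node, following the LBIT/RBIT convention above, is
sketched below (this is an illustration, not code from the text). The helper insucc() finds
the inorder successor of a node either by following a thread or by descending leftward in
the right subtree.

/* Threaded binary tree node: 1 = normal pointer, 0 = thread. */
struct tbt_node {
    struct tbt_node *lchild;       /* left child or inorder predecessor */
    struct tbt_node *rchild;       /* right child or inorder successor  */
    char data;
    unsigned lbit : 1;             /* 1 if lchild is a normal pointer   */
    unsigned rbit : 1;             /* 1 if rchild is a normal pointer   */
};

/* Inorder successor of p: follow the thread when rbit is 0, otherwise go
   right once and then left as far as possible. */
struct tbt_node *insucc(struct tbt_node *p)
{
    struct tbt_node *q = p->rchild;
    if (p->rbit == 1)
        while (q->lbit == 1)
            q = q->lchild;
    return q;
}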

5.6. Binary Search Tree:

A binary search tree is a binary tree. It may be empty. If it is not empty then it
satisfies the following properties:

1. Every element has a key and no two elements have the same key.

2. The keys in the left subtree are smaller than the key in the root.

3. The keys in the right subtree are larger than the key in the root.

4. The left and right subtrees are also binary search trees.

Figure 5.2.5(a) is a binary search tree, whereas figure 5.2.5(b) is not a binary search
tree.

[Figure 5.2.5. (a) a binary search tree with root 16; (b) a similar tree that violates the key ordering and is therefore not a binary search tree]
5.7. AVL Tree

Differences between trees and binary trees:

TREE BINARY TREE


Each element in a tree can have any Each element in a binary tree has at most
number of subtrees. two subtrees.

The subtrees in a tree are unordered. The subtrees of each element in a binary
tree are ordered (i.e. we distinguish
between left and right subtrees).

AVL Tree Data structure

AVL tree is a height-balanced binary search tree. That means, an AVL tree is also a binary
search tree but it is a balanced tree. A binary tree is said to be balanced if, the difference
between the heights of left and right subtrees of every node in the tree is either -1, 0 or
+1. In other words, a binary tree is said to be balanced if the height of left and right
children of every node differ by either -1, 0 or +1. In an AVL tree, every node maintains
extra information known as balance factor. The AVL tree was introduced in the year
1962 by G.M. Adelson-Velsky and E.M. Landis.

An AVL tree is defined as follows...

An AVL tree is a balanced binary search tree. In an AVL tree, balance factor of
every node is either -1, 0 or +1.

Balance factor of a node is the difference between the heights of the left and right
subtrees of that node. The balance factor of a node is calculated either height of left
subtree - height of right subtree (OR) height of right subtree - height of left
subtree. In the following explanation, we calculate as follows...

Balance factor = heightOfLeftSubtree - heightOfRightSubtree

Example of AVL Tree

[Figure: a height-balanced binary search tree with the balance factor of every node equal to -1, 0 or +1]
The above tree is a binary search tree and every node is satisfying balance factor
condition. So this tree is said to be an AVL tree.

Every AVL Tree is a binary search tree but every Binary Search Tree need not be
AVL tree.
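The balance factor convention above can be expressed directly in C. The following is an
illustrative sketch (the node layout and function names are assumptions, not code from the
text):

struct avlnode {
    int key;
    struct avlnode *left, *right;
};

static int max_int(int a, int b) { return a > b ? a : b; }

int height_of(struct avlnode *t)
{
    if (t == NULL)
        return -1;                       /* height of an empty subtree */
    return 1 + max_int(height_of(t->left), height_of(t->right));
}

/* Balance factor = height of left subtree - height of right subtree.
   A node is AVL-balanced when this value is -1, 0 or +1. */
int balance_factor(struct avlnode *t)
{
    if (t == NULL)
        return 0;
    return height_of(t->left) - height_of(t->right);
}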

AVL Tree Rotations

In AVL tree, after performing operations like insertion and deletion we need to check
the balance factor of every node in the tree. If every node satisfies the balance factor
condition then we conclude the operation otherwise we must make it balanced. Whenever
the tree becomes imbalanced due to any operation we use rotation operations to make
the tree balanced.

Rotation operations are used to make the tree balanced.

Rotation is the process of moving nodes either to left or to right to make the tree
balanced.

There are four rotations and they are classified into two types.

Single Left Rotation (LL Rotation)

In LL Rotation, every node moves one position to left from the current position. To
understand LL Rotation, let us consider the following insertion operation in AVL Tree...

Single Right Rotation (RR Rotation)

In RR Rotation, every node moves one position to right from the current position. To
understand RR Rotation, let us consider the following insertion operation in AVL Tree...
Left Right Rotation (LR Rotation)

The LR Rotation is a sequence of single left rotation followed by a single right rotation. In
LR Rotation, at first, every node moves one position to the left and one position to right
from the current position. To understand LR Rotation, let us consider the following
insertion operation in AVL Tree...

Right Left Rotation (RL Rotation)

The RL Rotation is a sequence of a single right rotation followed by a single left rotation. In RL
Rotation, at first every node moves one position to right and one position to left from the
current position. To understand RL Rotation, let us consider the following insertion
operation in AVL Tree...
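The two single rotations can be sketched as pointer rearrangements, reusing the
illustrative avlnode structure from the sketch above; the LR and RL rotations are obtained
by composing them.

/* Right rotation: fixes an LL imbalance at y. */
struct avlnode *rotate_right(struct avlnode *y)
{
    struct avlnode *x = y->left;
    y->left  = x->right;          /* x's right subtree becomes y's left */
    x->right = y;                 /* y becomes the right child of x     */
    return x;                     /* x is the new root of this subtree  */
}

/* Left rotation: fixes an RR imbalance at x. */
struct avlnode *rotate_left(struct avlnode *x)
{
    struct avlnode *y = x->right;
    x->right = y->left;           /* y's left subtree becomes x's right */
    y->left  = x;                 /* x becomes the left child of y      */
    return y;                     /* y is the new root of this subtree  */
}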

Operations on an AVL Tree

The following operations are performed on AVL tree...


1. Search

2. Insertion

3. Deletion

Search Operation in AVL Tree

In an AVL tree, the search operation is performed with O(log n) time complexity. The
search operation in the AVL tree is similar to the search operation in a Binary search tree.
We use the following steps to search an element in AVL tree...

Step 1 - Read the search element from the user.

Step 2 - Compare the search element with the value of root node in the tree.

Step 3 - If both are matched, then display "Given node is found!!!" and terminate
the function

Step 4 - If both are not matched, then check whether search element is smaller or
larger than that node value.

Step 5 - If search element is smaller, then continue the search process in left
subtree.

Step 6 - If search element is larger, then continue the search process in right
subtree.

Step 7 - Repeat the same until we find the exact element or until the search
element is compared with the leaf node.

Step 8 - If we reach to the node having the value equal to the search value, then
display "Element is found" and terminate the function.

Step 9 - If we reach to the leaf node and if it is also not matched with the search
element, then display "Element is not found" and terminate the function.

Insertion Operation in AVL Tree

In an AVL tree, the insertion operation is performed with O(log n) time complexity. In
AVL Tree, a new node is always inserted as a leaf node. The insertion operation is
performed as follows...

Step 1 - Insert the new element into the tree using Binary Search Tree insertion logic.
Step 2 - After insertion, check the Balance Factor of every node.
Step 3 - If the Balance Factor of every node is 0 or 1 or -1 then go for next operation.
Step 4 - If the Balance Factor of any node is other than 0 or 1 or -1 then that tree is
said to be imbalanced. In this case, perform suitable Rotation to make it balanced and go
for next operation.
Example: Construct an AVL Tree by inserting numbers from 1 to 8.
5.8. Search and Traversal Techniques for m-ary trees:

Search involves visiting nodes in a tree in a systematic manner, and may or may not
result into a visit to all nodes. When the search necessarily involved the examination of
every vertex in the tree, it is called the traversal. Traversing of a tree can be done in
two ways.

1. Depth first search or traversal.


2. Breadth first search or traversal.
5.8.1. Depth first search:

In Depth first search, we begin with root as a start state, then some successor of the
start state, then some successor of that state, then some successor of that and so on,
trying to reach a goal state. One simple way to implement depth first search is to use a
stack data structure consisting of root node as a start state.

If depth first search reaches a state S without successors, or if all the successors of a
state S have been chosen (visited) and a goal state has not yet been found, then it
"backs up"; that is, it goes to the immediately previous state or predecessor; formally,
the state whose successor was S originally.

To illustrate this let us consider the tree shown below.

[A search tree with start state S and goal state G: S has successors A, B and C; A has successors D and E; C has successor F; the first successor of F is H; H has successors J and G]

Suppose S is the start and G is the only goal state. Depth first search will first visit S,
then A, then D. But D has no successors, so we must back up to A and try its second
successor, E. But E has no successors either, so we back up to S and try its second
successor, B. B has no successors and is not the goal state G, so we back up to S again
and choose its third successor, C. C has one successor, F. The first successor of F is H,
and the first successor of H is J. J has no successors, so we back up to H and try its
second successor, which is the goal state G.

So the solution path to the goal is S, C, F, H and G and the states considered were in
order S, A, D, E, B, C, F, H, J, G.

Disadvantages:

1. It works very fine when search graphs are trees or lattices, but can get
struck in an infinite loop on graphs. This is because depth first search can
travel around a cycle in the graph forever.

To eliminate this keep a list of states previously visited, and never permit
search to return to any of them.

2. We cannot come up with shortest solution to the problem.

5.8.2. Breadth first search:

Breadth-first search explores the search tree level by level, starting
from S. Breadth-first search discovers vertices in increasing order of distance, and is so
named because it visits vertices across the entire breadth of a level before moving on to
the next level.
To illustrate this let us consider the following tree:

[The same search tree as before, with start state S and goal state G]

Breadth first search finds states level by level. Here we first check all the immediate
successors of the start state. Then all the immediate successors of these, then all the
immediate successors of these, and so on until we find a goal node. Suppose S is the
start state and G is the goal state. In the figure, start state S is at level 0; A, B and C
are at level 1; D, e and F at level 2; H and I at level 3; and J, G and K at level 4.

So breadth first search will consider, in order, S, A, B, C, D, E, F, H, I, J and G, and then
stop because it has reached the goal node.

Breadth first search does not have the danger of infinite loops as we consider states in
order of increasing number of branches (level) from the start state.

One simple way to implement breadth first search is to use a queue data structure
consisting of just a start state.

5.9. Sparse Matrices:

A sparse matrix is a two dimensional array having the value of majority elements as
null. The density of the matrix is the number of non-zero elements divided by the total
number of matrix elements. The matrices with very low density are often good for use
of the sparse format. For example,

        | 0  0  0  5 |
    A = | 0  2  0  0 |
        | 1  3  0  0 |
        | 0  0  4  0 |

As far as the storage of a sparse matrix is concerned, storing of null elements is
nothing but wastage of memory. So we should devise a technique such that only non-null
elements will be stored. The matrix A produces:

(3, 1) 1
(2, 2) 2
S =(3, 2) 3
(4, 3) 4
(1, 4) 5
The printed output lists the non-zero elements of S, together with their row and column
indices. The elements are sorted by columns, reflecting the internal data structure.
In large number of applications, sparse matrices are involved. One approach is to use
the linked list.
The program to represent sparse matrix:

/* Check whether the given matrix is sparse matrix or not, if so then print in
alternative form for storage. */

# include <stdio.h>
# include <conio.h>

main()
{
int matrix[20][20], m, n, total_elements, total_zeros = 0, i, j;
clrscr();
printf("\n Enter Number of rows and columns: ");
scanf("%d %d",&m, &n); total_elements = m *
n;
printf("\n Enter data for sparse matrix: ");
for(i = 0; i < m ; i++)
{
for( j = 0; j < n ; j++)
{
scanf("%d", &matrix[i][j]);
if( matrix[i][j] == 0)
{
total_zeros++;
}
}
}
if(total_zeros > total_elements/2 )
{
printf("\n Given Matrix is Sparse Matrix..");
printf("\n The Representaion of Sparse Matrix is: \n");
printf("\n Row \t Col \t Value "); for(i = 0; i < m ;
i++)
{
for( j = 0; j < n ; j++)
{
if( matrix[i][j] != 0)
{
printf("\n %d \t %d \t %d",i,j,matrix[i][j]);
}
}
}
}
else
printf("\n Given Matrix is Not a Sparse Matrix..");
}
EXERCISES

1. How many different binary trees can be made from three nodes that contain the
key value 1, 2, and 3?

2. a. Draw all the possible binary trees that have four leaves and all of whose nonleaf
nodes have two children.
b. Show what would be printed by each of the following.
An inorder traversal of the tree
A postorder traversal of the tree
A preorder traversal of the tree

3. a. Draw the binary search tree whose elements are inserted in the following order:
50 72 96 94 107 26 12 11 9 2 10 25 51 16 17 95

b. What is the height of the tree?


c. What nodes are on level?
d. Which levels have the maximum number of nodes that they could contain?
e. What is the maximum height of a binary search tree containing these nodes?
Draw such a tree?
f. What is the minimum height of a binary search tree containing these nodes?
Draw such a tree?

g. Show how the tree would look after the deletion of 29, 59 and 47?

h. Show how the (original) tree would look after the insertion of nodes containing
63, 77, 76, 48, 9 and 10 (in that order).

4.

5. nodes in a binary tree.

6.

7. Write a function to find the maximum number of nodes in any level of a
binary tree. The maximum number of nodes in any level of a binary tree is also
called the width of the tree.

8. Construct two binary trees so that their postorder traversal sequences are the
same.

9.

10.

11. Prove that every node in a tree except the root node has a unique parent.

12.
traversal sequences.

13. Prove that the inorder and postorder traversal sequences of a binary tree
uniquely characterize the binary tree. Write an algorithm to construct a binary
tree from its postorder and inorder traversal sequences.
14. Build the binary tree from the given traversal techniques:

A. Inorder:
Preorder:

B. Inorder:
Postorder:

C. Inorder:
Level order:

15. Build the binary tree from the given traversal techniques:

A. Inorder:
Preorder:

B. Inorder:
Postorder:

C. Inorder:
Level order:

16. Build the binary tree for the given inorder and preorder traversals:

Inorder: EACKFHDBG
Preorder: FAEKCDHGB

17. Convert the following general tree represented as a binary tree:

1 7 10

12 15 13 14 8

11 4

5 9 2 6

16 17
Multiple Choice Questions

The node that has no children is referred as: [ ]


A. Parent node C. Leaf node
B. Root node D. Sibblings

A binary tree in which all the leaves are on the same level is called as: [ ]
A. Complete binary tree C. Strictly binary tree
B. Full binary tree D. Binary search tree

How can the graphs be represented? [ ]


A. Adjacency matrix
B. Adjacency list
C. Incidence matrix
D. All of the above

The children of a same parent node are called as: [ ]


A. adjacent node C. Sibblings
B. non-leaf node D. leaf node

A tree with n vertices, consists of________ edges. [ ]


A. n - 1 C. n
B. n - 2 D. log n

The maximum number of nodes at any level is: [ ]


A. n C. n + 1
B. 2n D. 2^n

B C
FI GURE 1

D E F G

H I J K

For the Binary tree shown in fig. 1, the in-order traversal sequence is: [ ]
A. A B C D E F G H I J K C. H D I B E A F C J G K
B. H I D E B F J K G C A D. A B D H I E C F G J K

For the Binary tree shown in fig. 1, the pre-order traversal sequence is: [ ]
A. A B C D E F G H I J K C. H D I B E A F C J G K
B. H I D E B F J K G C A D. A B D H I E C F G J K

For the Binary tree shown in fig. 1, the post-order traversal sequence is: [ ]
A. A B C D E F G H I J K C. H D I B E A F C J G K
B. H I D E B F J K G C A D. A B D H I E C F G J K
[FIGURE 2: a weighted graph on vertices A .. G, and its adjacency list:
 A: B C D    B: A D E    C: A D F    D: A B C E F G    E: B D G    F: C D G    G: F D E]

[ ]
to add edges to the minimum spanning tree for the figure 2 shown above:
A. (A, B) then (A, C) then (A, D) then (D, E) then (C, F) then (D, G)
B. (A, D) then (E, G) then (B, D) then (D, E) then (F, G) then (A, C)
C. both A and B
D. none of the above

11. For the figure 2 shown above, the cost of the minimal spanning tree is: [ ]
A. 57
B. 68

FIGURE 3: a binary tree with root 14; the children of 14 are 2 and 11; the children of 2
are 1 and 3; the children of 11 are 10 and 30; 7 is the left child of 10 and 40 is the left
child of 30.

12. For the figure 3, how many leaves does it have? [ ]


A. 2 C. 6
B. 4 D. 8

13. For the figure 3, how many of the nodes have at least one sibling? [ ]
A. 5 C. 7
B. 6 D. 8

14. For the figure 3, How many descendants does the root have? [ ]
A. 0 C. 4
B. 2 D. 8

15. For the figure 3, what is the depth of the tree? [ ]


A. 2 C. 4
B. 3 D. 8

16. For the figure 3, which statement is correct? [ ]


A. The tree is neither complete nor full.
B. The tree is complete but not full.
C. The tree is full but not complete.
D. The tree is both full and complete.
There is a tree in the box at the top of this section. What is the order of [ ]
nodes visited using a pre-order traversal?
A. 1 2 3 7 10 11 14 30 40 C. 1 3 2 7 10 40 30 11 14
B. 1 2 3 14 7 10 11 40 30 D. 14 2 1 3 11 10 7 30 40

There is a tree in the box at the top of this section. What is the order of [ ]
nodes visited using an in-order traversal?
A. 1 2 3 7 10 11 14 30 40 C. 1 3 2 7 10 40 30 11 14
B. 1 2 3 14 7 10 11 40 30 D. 14 2 1 3 11 10 7 30 40

There is a tree in the box at the top of this section. What is the order of [ ]
nodes visited using a post-order traversal?
A. 1 2 3 7 10 11 14 30 40 C. 1 3 2 7 10 40 30 11 14
B. 1 2 3 14 7 10 11 40 30 D. 14 2 1 3 11 10 7 30 40

What is the minimum number of nodes in a full binary tree with depth 3? [ ]
A. 3 C. 8
B. 4 D. 15

Select the one true statement. [ C


A. Every binary tree is either complete or full.
B. Every complete binary tree is also a full binary tree.
C. Every full binary tree is also a complete binary tree.
D. No binary tree is both complete and full.

22. Suppose T is a binary tree with 14 nodes. What is the minimum possible depth of T?
A. 0 C. 4
B. 3 D. 5

23. Select the one FALSE statement about binary trees: [ ]


A. Every binary tree has at least one node.
B. Every non-empty tree has exactly one root node.
C. Every node has at most two children.
D. Every non-root node has exactly one parent.

24. Consider the node of a complete binary tree whose value is stored in data[i] for an
array implementation. If this node has a right child, where
will the right child's value be stored? [ ]
A. data[i+1]
B. data[i+2]

14

2 16 Figure 4

1 5

4
25. For the binary search tree shown in figure 4, Suppose we remove the root, [ ]
replacing it with something from the left subtree. What will be the new

root?
A. 1 D. 5
B. 2
C. 4

B C G

Tree 2 F
D

E C
E F

I D
H

J H B

Tree 1
J

26. Which traversals of tree 1 and tree 2, will produce the same sequence of [ ]
node names?
A. Preorder, Postorder C. Postorder, Inorder
B. Postorder, Postorder D. Inorder, Inorder

27. Which among the following is not a binary search tree? [ ]

A. 5 C.
5

3 4 7

2 6
3 6

B. 5 D.
14

3 2 16

7 6
1 5

4
23

11 27

7 17 25

6 9 14 FI GURE 5

28. For the binary search tree shown in figure 5, after deleting 23 from the [ ]
binary search tree what node will be at the root?
A. 11 C. 27
B. 25 D. 14

29. For the binary search tree shown in figure 5, after deleting 23 from the [ ]
binary search tree what parent child pair does not occur in the tree?
A. 25 27 C. 11 7
B. 27 11 D. 7 9
The number of nodes in a complete binary tree of depth d is: [ ]
A. 2^d C. 2^k
B. 2^k - 1 D. none of the above
The depth of a complete binary tree with n nodes is: [ ]
A. log n C. log2(n) + 1
B. n^2 D. 2^n

If the inorder and preorder traversal of a binary tree are D, B, F, E, G, H, A, [ ]


C and A, B, D, E, F, G, H, C respectively then, the postorder traversal of
that tree is:
A. D, F, H, G, E, B, C, A C. F, H, D, G, E, B, C, A
B. D, F, G, A, B, C, H, E D. D, F, H, G, E, B, C, A

The data structure used by level order traversal of binary tree is: [ ]
A. Queue C. linked list
B. Stack D. none of the above
Chapter
6
Graphs
6.1. Introduction to Graphs:

Graph G is a pair (V, E), where V is a finite set of vertices and E is a finite set of edges.
We will often denote n = |V|, e = |E|.

A graph is generally displayed as figure 6.5.1, in which the vertices are represented by
circles and the edges by lines.

An edge with an orientation (i.e., arrow head) is a directed edge, while an edge with no
orientation is an undirected edge.

If all the edges in a graph are undirected, then the graph is an undirected graph. The
graph in figure 6.5.1(a) is an undirected graph. If all the edges are directed; then the
graph is a directed graph. The graph of figure 6.5.1(b) is a directed graph. A directed
graph is also called as digraph. A graph G is connected if and only if there is a simple
path between any two nodes in G.

A graph G is said to be complete if every node u in G is adjacent to every other node v
in G. A complete graph with n nodes will have n(n-1)/2 edges. For example, Figure
6.5.1(a) and figure 6.5.1(d) are complete graphs.

A directed graph G is said to be connected, or strongly connected, if for each pair (u, v)
for nodes in G there is a path from u to v and also a path from v to u. On the other
hand, G is said to be unilaterally connected if for any pair (u, v) of nodes in G there is a
path from u to v or a path from v to u. For example, the digraph shown in figure 6.5.1
(e) is strongly connected.

[Figure 6.5.1: (a) an undirected graph, (b) a directed graph, (d) a complete graph, (e) a strongly connected digraph, (f) and (g) acyclic digraphs]

Figure 6.5.1 Various Graphs

We can assign a weight function to the edges: wG(e) is the weight of edge e ∈ E. The graph
which has such a function assigned is called a weighted graph.
The number of incoming edges to a vertex v is called in degree of the vertex (denote
indeg(v)). The number of outgoing edges from a vertex is called out-degree (denote
outdeg(v)). For example, let us consider the digraph shown in figure 6.5.1(f),

indegree(v1) = 2 outdegree(v1) = 1
indegree(v2) = 2 outdegree(v2) = 0

A path is a sequence of vertices (v1, v2, . . . . . , vk), where for all i, (vi, vi+1) ∈ E. A path is
simple if all vertices in the path are distinct. If there is a path containing one or more
edges which starts from a vertex Vi and terminates into the same vertex then the path
is known as a cycle. For example, there is a cycle in figure 6.5.1(a), figure 6.5.1(c) and
figure 6.5.1(d).

If a graph (digraph) does not have any cycle then it is called acyclic graph. For
example, the graphs of figure 6.5.1 (f) and figure 6.5.1 (g) are acyclic graphs.

A graph G' = (V', E') is a subgraph of the graph G = (V, E) iff V' ⊆ V and E' ⊆ E.

A Forest is a set of disjoint trees. If we remove the root node of a given tree then it
becomes forest. The following figure shows a forest F that consists of three trees T1, T2
and T3.

[A forest F consisting of the three disjoint trees T1, T2 and T3]

A graph that has either self loop or parallel edges or both is called multi-graph.

A tree is a connected acyclic graph (there are no sequences of edges that go around
in a loop). A spanning tree of a graph G = (V, E) is a tree that contains all vertices of V
and is a subgraph of G. A single graph can have multiple spanning trees.
and is a subgraph of G. A single graph can have multiple spanning trees.

Let T be a spanning tree of a graph G. Then

1. Any two vertices in T are connected by a unique simple path.

2. If any edge is removed from T, then T becomes disconnected.

3. If we add any edge into T, then the new graph will contain a cycle.

4. Number of edges in T is n-1.


6.2. Representation of Graphs:

There are three ways of representing digraphs. They are:

Adjacency matrix.
Adjacency List.
Incidence matrix.

Adjacency matrix:

In this representation, the adjacency matrix of a graph G is a two dimensional n x n


matrix, say A = (ai,j), where

    a_ij = 1  if there is an edge from v_i to v_j
    a_ij = 0  otherwise
The matrix is symmetric in case of undirected graph, while it may be asymmetric if the
graph is directed. This matrix is also called as Boolean matrix or bit matrix.

[A directed graph G1 on the vertices 1 .. 5 and its 5 x 5 adjacency matrix]

Figure 6.5.2. A graph and its Adjacency matrix

Figure 6.5.2(b) shows the adjacency matrix representation of the graph G1 shown in
figure 6.5.2(a). The adjacency matrix is also useful to store multigraph as well as
weighted graph. In case of multigraph representation, instead of entry 0 or 1, the entry
will be the number of edges between the two vertices.

In case of weighted graph, the entries are weights of the edges between the vertices.
The adjacency matrix for a weighted graph is called as cost adjacency matrix. Figure
6.5.3(b) shows the cost adjacency matrix representation of the graph G2 shown in
figure 6.5.3(a).

[A weighted graph G2 on the vertices A .. G and its cost adjacency matrix, in which entry (i, j) holds the weight of the edge between vertex i and vertex j]

Figure 6.5.3 Weighted graph and its Cost adjacency matrix

Adjacency List:
In this representation, the n rows of the adjacency matrix are represented as n linked
lists. An array Adj[1, 2, . . . . . n] of pointers where for 1 < v < n, Adj[v] points to a
linked list containing the vertices which are adjacent to v (i.e. the vertices that can be
reached from v by a single edge). If the edges have weights then these weights may
also be stored in the linked list elements. For the graph G in figure 6.5.4(a), the
adjacency list in shown in figure 6.5.4 (b).

[(a) a directed graph with its adjacency matrix and (b) the corresponding adjacency list]

Figure 6.5.4 Adjacency matrix and adjacency list

Incidence Matrix:

In this representation, if G is a graph with n vertices, e edges and no self loops, then
the incidence matrix A is defined as an n by e matrix, say A = (ai,j), where

	ai,j = 1	if there is an edge j incident to vi
	ai,j = 0	otherwise

Here, n rows correspond to n vertices and e columns correspond to e edges. Such a


matrix is called as vertex-edge incidence matrix or simply incidence matrix.
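A small C sketch of this representation follows (not from the original notes; the bounds and the example edges are assumed for illustration only).

#include <stdio.h>

#define MAXV 10
#define MAXE 20

int inc[MAXV][MAXE];                /* rows = vertices, columns = edges */

void addEdge(int e, int u, int v)   /* edge number e joins vertices u and v */
{
    inc[u][e] = 1;
    inc[v][e] = 1;
}

int main()
{
    addEdge(0, 0, 1);               /* edge 0 joins vertices 0 and 1 */
    addEdge(1, 1, 2);               /* edge 1 joins vertices 1 and 2 */
    printf("%d %d\n", inc[1][0], inc[1][1]);   /* vertex 1 lies on both edges */
    return 0;
}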
6.3. Minimum Spanning Tree (MST):

A spanning tree for a connected graph is a tree whose vertex set is the same as the
vertex set of the given graph, and whose edge set is a subset of the edge set of the
given graph. i.e., any connected graph will have a spanning tree.

Weight of a spanning tree w(T) is the sum of weights of all edges in T. Minimum
spanning tree (MST) is a spanning tree with the smallest possible weight.

Example:

(Figures: a graph G together with three of its many possible spanning trees, and a weighted graph G with its minimal spanning tree; the drawings are lost in formatting.)

Let's consider a couple of real-world examples on minimum spanning tree:

One practical application of a MST would be in the design of a network. For
instance, a group of individuals, who are separated by varying distances, wish to be
connected together in a telephone network. Although MST cannot do anything about
the distance from one connection to another, it can be used to determine the least cost
paths with no cycles in this network, thereby connecting everyone at a minimum cost.

Another useful application of MST would be finding airline routes. The vertices of the
graph would represent cities, and the edges would represent routes between the
cities. MST can be applied to optimize airline routes by finding the least costly paths
with no cycles.

A minimum spanning tree can be constructed using any of the following two algorithms:

1. Kruskal's algorithm
2. Prim's algorithm

Both algorithms differ in their methodology, but both eventually end up with the MST.
Kruskal's algorithm works directly on the edges: it considers them in increasing order
of cost in determining the MST. In Prim's algorithm a single tree is grown from a
starting vertex, whereas in Kruskal's algorithm a forest of trees is gradually merged.
6.3.1. MINIMUM-COST SPANNING TREES: KRUSKAL'S ALGORITHM

This is a greedy algorithm. A greedy algorithm chooses some local optimum (i.e.
picking an edge with the least weight in a MST).

Kruskal's algorithm works as follows: Take a graph with 'n' vertices, keep on adding the
shortest (least cost) edge, while avoiding the creation of cycles, until (n - 1) edges
have been added. Sometimes two or more edges may have the same cost. The order in
which such edges are chosen does not matter: different minimum spanning trees
may result, but they will all have the same total cost, which will always be the
minimum cost.

1. Make the tree T empty.

2. Repeat the steps 3, 4 and 5 as long as T contains less than n - 1 edges and E is
not empty otherwise, proceed to step 6.

3. Choose an edge (v, w) from E of lowest cost.


4. Delete (v, w) from E.

5. If (v, w) does not create a cycle in T

then Add (v, w) to T


else discard (v, w)

6. If T contains fewer than n - 1 edges then print no spanning tree.
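The steps above can be coded compactly if a disjoint-set (union-find) structure is used to detect cycles. The following C sketch is not part of the original notes; the edge list (taken from Example 1 below, with vertices renumbered 0 to 5), the array bounds and the plain, non-optimised find() are assumptions made for illustration.

#include <stdio.h>
#include <stdlib.h>

#define MAXV 10

struct edge { int u, v, cost; };

int parent[MAXV];

int find(int x)                         /* root of the set containing x */
{
    while (parent[x] != x)
        x = parent[x];
    return x;
}

int cmpEdge(const void *a, const void *b)   /* ascending order of cost */
{
    return ((const struct edge *) a)->cost - ((const struct edge *) b)->cost;
}

int kruskal(struct edge e[], int nedges, int n)   /* returns cost of the MST */
{
    int i, count = 0, mincost = 0;
    for (i = 0; i < n; i++)
        parent[i] = i;
    qsort(e, nedges, sizeof(struct edge), cmpEdge);
    for (i = 0; i < nedges && count < n - 1; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                 /* edge does not create a cycle */
            parent[ru] = rv;            /* merge the two trees */
            mincost += e[i].cost;
            count++;
        }
    }
    return mincost;
}

int main()
{
    struct edge e[] = { {0,1,10}, {2,5,15}, {3,5,20}, {1,5,25}, {0,3,30},
                        {2,4,35}, {1,4,40}, {0,4,45}, {1,2,50}, {4,5,55} };
    printf("Cost of minimal spanning tree = %d\n", kruskal(e, 10, 6));  /* 105 */
    return 0;
}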

Example 1:

Construct the minimal spanning tree for the graph shown below:

(Figure: a weighted graph on vertices 1 to 6; its edges and costs are listed in the table below.)

Arrange all the edges in the increasing order of their costs:

Cost 10 15 20 25 30 35 40 45 50 55

Edge (1, 2) (3, 6) (4, 6) (2, 6) (1, 4) (3, 5) (2, 5) (1, 5) (2, 3) (5, 6)
The stages in Kruskal's algorithm are as follows:

EDGE	COST	REMARKS

(1, 2)	10	The edge between vertices 1 and 2 is the first edge selected. It is
		included in the spanning tree.

(3, 6)	15	Next, the edge between vertices 3 and 6 is selected and included in
		the tree.

(4, 6)	20	The edge between vertices 4 and 6 is next included in the tree.

(2, 6)	25	The edge between vertices 2 and 6 is considered next and included
		in the tree.

(1, 4)	30	Reject. The edge between the vertices 1 and 4 is discarded as its
		inclusion creates a cycle.

(3, 5)	35	Finally, the edge between vertices 3 and 5 is considered and included
		in the tree built. This completes the tree.

The cost of the minimal spanning tree is 10 + 15 + 20 + 25 + 35 = 105.
Example 2:

Construct the minimal spanning tree for the graph shown below:

(Figure: a weighted graph on vertices 1 to 7; its edges and costs are listed in the table below.)

Solution:

Arrange all the edges in the increasing order of their costs:

Cost 10 12 14 16 18 22 24 25 28
Edge (1, 6) (3, 4) (2, 7) (2, 3) (4, 7) (4, 5) (5, 7) (5, 6) (1, 2)

EDGE	COST	REMARKS

(1, 6)	10	The edge between vertices 1 and 6 is the first edge selected. It is
		included in the spanning tree.

(3, 4)	12	Next, the edge between vertices 3 and 4 is selected and included in
		the tree.

(2, 7)	14	The edge between vertices 2 and 7 is next included in the tree.

(2, 3)	16	The edge between vertices 2 and 3 is next included in the tree.

(4, 7)	18	Reject. The edge between the vertices 4 and 7 is discarded as its
		inclusion creates a cycle.

(4, 5)	22	The edge between vertices 4 and 5 is considered next and included
		in the tree.

(5, 7)	24	Reject. The edge between the vertices 5 and 7 is discarded as its
		inclusion creates a cycle.

(5, 6)	25	Finally, the edge between vertices 5 and 6 is considered and included
		in the tree built. This completes the tree.

The cost of the minimal spanning tree is 10 + 12 + 14 + 16 + 22 + 25 = 99.

6.3.2. MINIMUM-COST SPANNING TREES: PRIM'S ALGORITHM

A given graph can have many spanning trees. From these many spanning trees, we
have to select a cheapest one. This tree is called as minimal cost spanning tree.

A minimal cost spanning tree is defined for a connected undirected graph G in which each edge is
labeled with a number (edge labels may signify lengths or weights other than costs).
The minimal cost spanning tree is a spanning tree for which the sum of the edge labels is as
small as possible.

The slight modification of the spanning tree algorithm yields a very simple algorithm for
finding an MST. In the spanning tree algorithm, any vertex not in the tree but
connected to it by an edge can be added. To find a Minimal cost spanning tree, we
must be selective - we must always add a new vertex for which the cost of the new
edge is as small as possible.

This simple modified algorithm of spanning tree is called Prim's algorithm for finding a
minimal cost spanning tree. Prim's algorithm is an example of a greedy algorithm.
E is the set of edges in G. cost [1:n, 1:n] is the cost adjacency matrix of an n vertex
graph such that cost [i, j] is either a positive real number or ∞ if no edge (i, j) exists. A
minimum spanning tree is computed and stored as a set of edges in the array t [1:n-1,
1:2]. (t [i, 1], t [i, 2]) is an edge in the minimum-cost spanning tree. The final cost is
returned.

Algorithm Prim (E, cost, n, t)


{
Let (k, l) be an edge of minimum cost in E;
mincost := cost [k, l];
t [1, 1] := k; t [1, 2] := l;
for i :=1 to n do // Initialize near
if (cost [i, l] < cost [i, k]) then near [i] := l;
else near [i] := k;
near [k] :=near [l] := 0;
for i:=2 to n - 1 do // Find n - 2 additional edges for t.
{
Let j be an index such that near [j] ≠ 0 and
cost [j, near [j]] is minimum;
t [i, 1] := j; t [i, 2] := near [j];
mincost := mincost + cost [j, near [j]];
near [j] := 0
for k:= 1 to n do // Update near[].
if ((near [k] ≠ 0) and (cost [k, near [k]] > cost [k, j]))
then near [k] := j;
}
return mincost;
}

Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a connected
weighted graph. It finds a tree of that graph which includes every vertex and whose total
edge weight is less than or equal to that of every other possible spanning tree.

Algorithm:

1. Initialize the minimal spanning tree with a single vertex, randomly chosen from the graph.
2. Repeat steps 3 and 4 until all the vertices are included in the tree.
3. Select an edge that connects the tree with a vertex not yet in the tree, so that the
   weight of the edge is minimal and inclusion of the edge does not form a cycle.
4. Add the selected edge and the vertex that it connects to the tree.
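These steps translate into the following minimal C sketch (it is not the pseudocode of the notes verbatim). The INF sentinel, vertex numbering from 0 and the cost matrix, which encodes Example 1 of the previous section, are assumptions used only for illustration.

#include <stdio.h>

#define MAXV 10
#define INF  9999                       /* assumed sentinel meaning "no edge" */

int prim(int cost[MAXV][MAXV], int n)   /* returns the cost of the MST */
{
    int intree[MAXV], dist[MAXV], i, k, u, mincost = 0;

    for (i = 0; i < n; i++) { intree[i] = 0; dist[i] = cost[0][i]; }
    intree[0] = 1;                      /* start the tree from vertex 0 */

    for (k = 1; k < n; k++) {
        u = -1;
        /* cheapest edge from the tree to a vertex not yet in the tree */
        for (i = 0; i < n; i++)
            if (!intree[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        intree[u] = 1;
        mincost += dist[u];
        /* the newly added vertex u may offer cheaper connections */
        for (i = 0; i < n; i++)
            if (!intree[i] && cost[u][i] < dist[i])
                dist[i] = cost[u][i];
    }
    return mincost;
}

int main()
{
    int cost[MAXV][MAXV] = {
        {   0,  10, INF,  30,  45, INF },
        {  10,   0,  50, INF,  40,  25 },
        { INF,  50,   0, INF,  35,  15 },
        {  30, INF, INF,   0, INF,  20 },
        {  45,  40,  35, INF,   0,  55 },
        { INF,  25,  15,  20,  55,   0 } };
    printf("Cost of minimal spanning tree = %d\n", prim(cost, 6));   /* 105 */
    return 0;
}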
6.4. Reachability Matrix :

Warshall's algorithm determines, for every pair of vertices x and y in a digraph, whether
there is a path from x to y. It does not need to know the lengths of the edges in the given
directed graph. This information is conveniently expressed by a matrix in which a one
entry indicates the existence of a path and a zero entry indicates its non-existence.

	Adjacency Matrix  ->  Warshall's Algorithm  ->  All Pairs Reachability Matrix

It begins with the adjacency matrix for the given graph, which is called A0, and then
generates a sequence of matrices A1, A2, . . . . . , An and then stops.

The matrix Ai contains information about the existence of i-paths, i.e. paths whose
intermediate vertices are drawn only from the first i vertices. A one entry in the matrix Ai
will correspond to the existence of such a path and a zero entry will correspond to
non-existence. Thus when the algorithm stops, the final matrix An contains the desired
connectivity information.

A one entry indicates a pair of vertices, which are connected and zero entry indicates a
pair, which are not. This matrix is called a reachability matrix or path matrix for the
graph. It is also called the transitive closure of the original adjacency matrix.

The update rule for computing Ai from Ai-1 is:

	Ai [x, y] = Ai-1 [x, y] ∨ (Ai-1 [x, i] ∧ Ai-1 [i, y])		---- (1)
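Applying the update rule (1) for i = 1, 2, . . . , n gives the following C sketch of Warshall's algorithm. It is not part of the original notes; the array bound and the 3-vertex sample graph are assumptions made for illustration.

#include <stdio.h>

#define MAXV 10

/* builds the path (reachability) matrix p[][] from the adjacency matrix a[][] */
void warshall(int a[MAXV][MAXV], int p[MAXV][MAXV], int n)
{
    int i, j, k;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            p[i][j] = a[i][j];                      /* p starts as A0 */
    for (k = 0; k < n; k++)                         /* allow vertex k as an intermediate */
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                p[i][j] = p[i][j] || (p[i][k] && p[k][j]);
}

int main()
{
    int a[MAXV][MAXV] = { {0,1,0}, {0,0,1}, {0,0,0} };   /* edges 1->2 and 2->3 */
    int p[MAXV][MAXV], i, j, n = 3;
    warshall(a, p, n);
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            printf("%d ", p[i][j]);
        printf("\n");
    }
    return 0;          /* p[0][2] is 1: vertex 3 is reachable from vertex 1 */
}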

Floyd-Warshall Algorithm is an algorithm for finding the shortest path between all the pairs of vertices in a
weighted graph. This algorithm works for both the directed and undirected weighted graphs. But, it does not
work for the graphs with negative cycles (where the sum of the edges in a cycle is negative).
A weighted graph is a graph in which each edge has a numerical value associated with it.
The Floyd-Warshall algorithm is also called Floyd's algorithm, the Roy-Floyd algorithm, the Roy-Warshall
algorithm or the WFI algorithm.
This algorithm follows the dynamic programming approach to find the shortest paths.

How Floyd-Warshall Algorithm Works?


Let the given graph be a weighted digraph with n vertices (the drawing of the example
graph used below is lost in formatting). Follow the steps below to find the shortest path
between all the pairs of vertices.

1. Create a matrix A0 of dimension n*n where n is the number of vertices. The row and the column are
   indexed as i and j respectively. i and j are the vertices of the graph.

   Each cell A[i][j] is filled with the distance from the ith vertex to the jth vertex. If there is no path
   from the ith vertex to the jth vertex, the cell is left as infinity.

2. Now, create a matrix A1 using matrix A0. The elements in the first column and the first row are left as
   they are. The remaining cells are filled in the following way.

   Let k be the intermediate vertex in the shortest path from source to destination. In this step, k is the first
   vertex. A[i][j] is filled with (A[i][k] + A[k][j]) if (A[i][j] > A[i][k] + A[k][j]).

   That is, if the direct distance from the source to the destination is greater than the path through the
   vertex k, then the cell is filled with A[i][k] + A[k][j].

   In this step, k is vertex 1. We calculate the distance from the source vertex to the destination vertex through this
   vertex k.

   For example: for A1[2, 4], suppose the direct distance from vertex 2 to 4 is 4 and the sum of the distances from
   vertex 2 to 4 through vertex 1 (i.e. from vertex 2 to 1 and from vertex 1 to 4) is 7. Since 4 < 7, A1[2, 4] is
   filled with 4.

3. In a similar way, A2 is created using A1. The elements in the second column and the second row are left
   as they are. In this step, k is the second vertex (i.e. vertex 2). The remaining steps are the same as in step 2.

4. Similarly, A3 and A4 are also created.

5. A4 gives the shortest path between each pair of vertices.

Floyd-Warshall Algorithm
n = no of vertices
A = matrix of dimension n*n
for k = 1 to n
for i = 1 to n
for j = 1 to n
Ak[i, j] = min (Ak-1[i, j], Ak-1[i, k] + Ak-1[k, j])
return A
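A direct C sketch of this algorithm is given below (it is not from the original notes). It updates a single distance matrix in place, which gives the same result as keeping the separate matrices A1 . . . An, because row k and column k do not change in step k. The INF sentinel and the 4-vertex sample matrix are assumptions for illustration.

#include <stdio.h>

#define MAXV 10
#define INF  9999                 /* assumed sentinel meaning "no edge" */

/* replaces dist[][] by the matrix of shortest path lengths between all pairs */
void floydWarshall(int dist[MAXV][MAXV], int n)
{
    int i, j, k;
    for (k = 0; k < n; k++)
        for (i = 0; i < n; i++)
            for (j = 0; j < n; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main()
{
    int d[MAXV][MAXV] = {
        { 0,   3,   INF, 7   },
        { 8,   0,   2,   INF },
        { 5,   INF, 0,   1   },
        { 2,   INF, INF, 0   } };
    int i, j, n = 4;
    floydWarshall(d, n);
    for (i = 0; i < n; i++) {
        for (j = 0; j < n; j++)
            printf("%5d", d[i][j]);
        printf("\n");
    }
    return 0;
}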
6.5. Traversing a Graph

Many graph algorithms require one to systematically examine the nodes and edges of a
graph G. There are two standard ways to do this. They are:

Breadth first traversal (BFT)


Depth first traversal (DFT)
The BFT will use a queue as an auxiliary structure to hold nodes for future processing
and the DFT will use a STACK.

During the execution of these algorithms, each node N of G will be in one of three
states, called the status of N, as follows:

1. STATUS = 1 (Ready state): The initial state of the node N.

2. STATUS = 2 (Waiting state): The node N is on the QUEUE or STACK, waiting to


be processed.

3. STATUS = 3 (Processed state): The node N has been processed.

Both BFS and DFS impose a tree (the BFS/DFS tree) on the structure of graph. So, we
can compute a spanning tree in a graph. The computed spanning tree is not a minimum
spanning tree. The spanning trees obtained using depth first search are called depth
first spanning trees. The spanning trees obtained using breadth first search are called
Breadth first spanning trees.

6.5.1. Breadth first search and traversal:

The general idea behind a breadth first traversal beginning at a starting node A is as
follows. First we examine the starting node A. Then we examine all the neighbors of A.
Then we examine all the neighbors of neighbors of A. And so on. We need to keep track
of the neighbors of a node, and we need to guarantee that no node is processed more
than once. This is accomplished by using a QUEUE to hold nodes that are waiting to be
processed, and by using a field STATUS that tells us the current status of any node.
The spanning trees obtained using BFS are called Breadth first spanning trees.

Breadth first traversal algorithm on graph G is as follows:

This algorithm executes a BFT on graph G beginning at a starting node A.

1. Initialize all nodes to the ready state (STATUS = 1).

2. Put the starting node A in QUEUE and change its status to the waiting
   state (STATUS = 2).

3. Repeat the following steps until QUEUE is empty:

   a. Remove the front node N of QUEUE. Process N and change the
      status of N to the processed state (STATUS = 3).

   b. Add to the rear of QUEUE all the neighbors of N that are in the
      ready state (STATUS = 1), and change their status to the waiting
      state (STATUS = 2).

4. Exit.
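The following is a short C sketch of this BFT algorithm (not part of the original notes). It assumes an adjacency matrix representation, vertices numbered from 0 and a simple array-based queue; the sample graph in main() is made up for illustration.

#include <stdio.h>

#define MAXV 20

int adj[MAXV][MAXV];             /* adjacency matrix of the graph            */
int status[MAXV];                /* 1 = ready, 2 = waiting, 3 = processed    */
int queue[MAXV];
int front = 0, rear = 0;

void bft(int n, int start)       /* breadth first traversal from 'start' */
{
    int v, w;
    for (v = 0; v < n; v++)      /* step 1: all nodes ready              */
        status[v] = 1;
    queue[rear++] = start;       /* step 2: put the start node in QUEUE  */
    status[start] = 2;
    while (front < rear) {       /* step 3: until QUEUE is empty         */
        v = queue[front++];      /* remove the front node N              */
        printf("%d ", v);        /* process N                            */
        status[v] = 3;
        for (w = 0; w < n; w++)  /* add all ready neighbours of N        */
            if (adj[v][w] && status[w] == 1) {
                queue[rear++] = w;
                status[w] = 2;
            }
    }
}

int main()
{
    /* small undirected graph with edges 0-1, 0-2, 1-3, 2-3 */
    adj[0][1] = adj[1][0] = 1;   adj[0][2] = adj[2][0] = 1;
    adj[1][3] = adj[3][1] = 1;   adj[2][3] = adj[3][2] = 1;
    bft(4, 0);                   /* prints: 0 1 2 3 */
    return 0;
}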
6.5.2. Depth first search and traversal:

Depth first search of undirected graph proceeds as follows: First we examine the
starting node V. Next an unvisited vertex 'W' adjacent to 'V' is selected and a depth
first search from 'W' is initiated. When a vertex 'U' is reached such that all its adjacent
vertices have been visited, we back up to the last vertex visited, which has an unvisited
vertex 'W' adjacent to it and initiate a depth first search from W. The search terminates
when no unvisited vertex can be reached from any of the visited ones.

This algorithm is similar to the preorder traversal of a binary tree. The DFT algorithm is similar
to BFT except that now we use a STACK instead of the QUEUE. Again the field STATUS is used to
tell us the current status of a node.

The algorithm for depth first traversal on a graph G is as follows.

This algorithm executes a DFT on graph G beginning at a starting node A.

1. Initialize all nodes to the ready state (STATUS = 1).

2. Push the starting node A into STACK and change its status to the waiting state
(STATUS = 2).

3. Repeat the following steps until STACK is empty:

a. Pop the top node N from STACK. Process N and change the status of N to
the processed state (STATUS = 3).

b. Push all the neighbors of N that are in the ready state (STATUS = 1), and
change their status to the waiting state (STATUS = 2).
4. Exit.
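A matching C sketch of the DFT algorithm is given below (again not part of the original notes; the same assumptions as in the BFT sketch are made, with an array used as the STACK). Neighbours are pushed in decreasing order so that the smaller-numbered ones are popped, and hence visited, first.

#include <stdio.h>

#define MAXV 20

int adj[MAXV][MAXV];             /* adjacency matrix of the graph            */
int status[MAXV];                /* 1 = ready, 2 = waiting, 3 = processed    */
int stack[MAXV];
int top = -1;

void dft(int n, int start)       /* depth first traversal from 'start' */
{
    int v, w;
    for (v = 0; v < n; v++)      /* step 1: all nodes ready              */
        status[v] = 1;
    stack[++top] = start;        /* step 2: push the start node          */
    status[start] = 2;
    while (top >= 0) {           /* step 3: until STACK is empty         */
        v = stack[top--];        /* pop the top node N                   */
        printf("%d ", v);        /* process N                            */
        status[v] = 3;
        for (w = n - 1; w >= 0; w--)   /* push all ready neighbours of N */
            if (adj[v][w] && status[w] == 1) {
                stack[++top] = w;
                status[w] = 2;
            }
    }
}

int main()
{
    /* small undirected graph with edges 0-1, 0-2, 1-3, 2-3 */
    adj[0][1] = adj[1][0] = 1;   adj[0][2] = adj[2][0] = 1;
    adj[1][3] = adj[3][1] = 1;   adj[2][3] = adj[3][2] = 1;
    dft(4, 0);                   /* prints: 0 1 3 2 */
    return 0;
}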

Example 1:

Consider the graph shown below. Traverse the graph shown below in breadth first
order and depth first order.

Node	Adjacency List
A	F, C, B
B	A, C, G
C	A, B, D, E, F, G
D	C, F, E, J
E	C, D, G, J, K
F	A, C, D
G	B, C, E, K
J	D, E, K
K	E, G, J

Adjacency list for graph G (the drawing of G is lost in formatting; the list above defines it completely).
Breadth-first search and traversal:

The steps involved in breadth first traversal are as follows:

Node removed	QUEUE		Processed Nodes
(start)		A
A		F C B		A
F		C B D		A F
C		B D E G		A F C
B		D E G		A F C B
D		E G J		A F C B D
E		G J K		A F C B D E
G		J K		A F C B D E G
J		K		A F C B D E G J
K		(empty)		A F C B D E G J K

For the above graph the breadth first traversal sequence is: A F C B D E G J K.

Depth-first search and traversal:

The steps involved in depth first traversal are as follows:

Node popped	STACK		Processed Nodes
(start)		A
A		B C F		A
F		B C D		A F
D		B C E J		A F D
J		B C E K		A F D J
K		B C E G		A F D J K
G		B C E		A F D J K G
E		B C		A F D J K G E
C		B		A F D J K G E C
B		(empty)		A F D J K G E C B

For the above graph the depth first traversal sequence is: A F D J K G E C B.
Example 2:

Traverse the graph shown below in breadth first order, depth first order and construct
the breadth first and depth first spanning trees.
Node	Adjacency List
A	F, B, C, G
B	A
C	A, G
D	E, F
E	G, D, F
F	A, E, D
G	A, L, E, H, J, C
H	G, I
I	H
J	G, L, K, M
K	J
L	G, J, M
M	L, J

The adjacency list for the graph G (the drawing of G is lost in formatting; the list above defines it).

If the depth first traversal is initiated from vertex A, then the vertices of graph G are
visited in the order: A F E G L J K M H I C D B. The depth first spanning tree is shown
in the figure given below:

(Figure: the depth first spanning tree rooted at A; the drawing is lost in formatting.)

If the breadth first traversal is initiated from vertex A, then the vertices of graph G are
visited in the order: A F B C G E D L H J M I K. The breadth first spanning tree is
shown in the figure given below:

(Figure: the breadth first spanning tree rooted at A; the drawing is lost in formatting.)


Example 3:

Traverse the graph shown below in breadth first order, depth first order and construct
the breadth first and depth first spanning trees.

Node	Adjacency List
1	2, 3
2	1, 4, 5
3	1, 6, 7
4	2, 8
5	2, 8
6	3, 8
7	3, 8
8	4, 5, 6, 7

Adjacency list for G (the drawing of the graph is lost in formatting).

Depth first search and traversal:

If the depth first is initiated from vertex 1, then the vertices of graph G are visited in
the order: 1, 2, 4, 8, 5, 6, 3, 7. The depth first spanning tree is as follows:

(Figure: depth first spanning tree; the drawing is lost in formatting.)


Breadth first search and traversal:

If the breadth first search is initiated from vertex 1, then the vertices of G are visited in
the order: 1, 2, 3, 4, 5, 6, 7, 8. The breadth first spanning tree is as follows:

(Figure: breadth first spanning tree; the drawing is lost in formatting.)

EXERCISES

1. Show that the sum of degrees of all vertices in an undirected graph is twice the
number of edges.

2. Show that the number of vertices of odd degree in a finite graph is even.

3.
n-1
4. Show that the
1.

5. Prove that the edges explored by a breadth first or depth first traversal of a
connected graph form a tree.

6. Explain how existence of a cycle in an undirected graph may be detected by


traversing the graph in a depth first manner.

7.
adjacency matrix.

8. Give an example of a connected directed graph so that a depth first traversal of


that graph yields a forest and not a spanning tree of the graph.

9.
matrix representation of graphs.

10.
in a graph (i.e. to compute the transitive closure matrix of a graph)

11.
adjacency list.

12. Construct a weighted graph for which the minimal spanning trees produced by
Kruskal's algorithm and Prim's algorithm are different.
13. Describe the algorithm to find a minimum spanning tree T of a weighted graph G.
Find the minimum spanning tree T of the graph shown below.

(Figure: a weighted graph on vertices A to E; the drawing is lost in formatting.)

14. For the graph given below find the following:


a) Linked representation of the graph.
b) Adjacency list.
c) Depth first spanning tree.
d) Breadth first spanning tree.
e) Minimal spanning tree using

(Figure: a weighted graph; the drawing is lost in formatting.)

15. For the graph given below find the following:


f) Linked representation of the graph.
g) Adjacency list.
h) Depth first spanning tree.
i) Breadth first spanning tree.
j)

(Figure: a graph on vertices 1 to 8; the drawing is lost in formatting.)

16. For the graph given below find the following:


k) Linked representation of the graph.
l) Adjacency list.
m) Depth first spanning tree.
n) Breadth first spanning tree.
o) Minimal

(Figure: a graph on vertices 1 to 8; the drawing is lost in formatting.)
Multiple Choice Questions

1. How can the graphs be represented? [ ]


A. Adjacency matrix C. Incidence matrix
B. Adjacency list D. All of the above

2. The depth-first traversal in graph is analogous to tree traversal: [ ]


A. In-order C. Pre-order
B. Post-order D. Level order

3. The children of a same parent node are called as: [ ]


A. adjacent node C. Sibblings
B. non-leaf node D. leaf node

4. Complete graphs with n nodes will have__________ edges. [ ]


A. n - 1 C. n(n-1)/2
B. n/2 D. (n 1)/2

5. A graph with no cycle is called as: [ ]


A. Sub-graph C. Acyclic graph
B. Directed graph

6. The maximum number of nodes at any level is: [ ]


A. n C. n + 1
B. 2n D. 2n

Node	Adjacency List
A	B, C, D
B	A, D, E
C	A, D, F
D	A, B, C, E, F, G
E	B, D, G
F	C, D, G
G	F, D, E

FIGURE 1 and its adjacency list (the weighted drawing is lost in formatting; the edge weights shown in the original figure included 20, 23, 15, 36, 9, 25, 16, 28, 3 and 17).


7. For the figure 1 shown above, the depth first spanning tree visiting [ ]
sequence is:
A. A B C D E F G C. A B C D E F G
B. A B D C F G E

8. For the figure 1 shown above, the breadth first spanning tree visiting [ ]
sequence is:
A. A B D C F G E C. A B C D E F G
B. A B C D E F G

9. [ ]
to add edges to the minimum spanning tree for the figure 1 shown
above:
A. (A, B) then (A, C) then (A, D) then (D, E) then (C, F) then (D, G)
B. (A, D) then (E, G) then (B, D) then (D, E) then (F, G) then (A, C)
C. both A and B

10. For the figure 1 shown above, the cost of the minimal spanning tree is: [ ]
A. 57 C. 48
B. 68 D. 32
11. A simple graph has no loops. What other property must a simple graph [ ]
have?
A. It must be directed. C. It must have at least one vertex.
B. It must be undirected. D. It must have no multiple edges.

12. Suppose you have a directed graph representing all the flights that an [ ]
airline flies. What algorithm might be used to find the best sequence of
connections from one city to another?
A. Breadth first search. C. A cycle-finding algorithm.
B. Depth first search. D. A shortest-path algorithm.

13. If G is a directed graph with 20 vertices, how many boolean values will [ ]


be needed to represent G using an adjacency matrix?
A. 20 C. 200
B. 40 D. 400

14. Which graph representation allows the most efficient determination of [ ]


the existence of a particular edge in a graph?
A. An adjacency matrix. C. Incidence matrix
B. Edge lists. D. none of the above

15. What graph traversal algorithm uses a queue to keep track of vertices [ ]
which need to be processed?
A. Breadth-first search. C Level order search
B. Depth-first search. D. none of the above

16. What graph traversal algorithm uses a stack to keep track of vertices [ ]
which need to be processed?
A. Breadth-first search. C Level order search
B. Depth-first search. D. none of the above

17. What is the expected number of operations needed to loop through all [ ]
the edges terminating at a particular vertex given an adjacency matrix
representation of the graph? (Assume n vertices are in the graph and m
edges terminate at the desired node.)
A. O(m) C. O(m²)
B. O(n) D. O(n²)

18. What is the expected number of operations needed to loop through all [ ]
the edges terminating at a particular vertex given an adjacency list
representation of the graph? (Assume n vertices are in the graph and m
edges terminate at the desired node.)
A. O(m) C. O(m²)
B. O(n) D. O(n²)
19. [ ]
(FIGURE 3: a weighted graph on vertices A to G; the drawing is lost in formatting.)

minimum spanning tree algorithm to add edges to the minimum


spanning tree?
A. (A, G) then (G, C) then (C, B) then (C, F) then (F, E) then (E, D)
B. (A, G) then (A, B) then (B, C) then (A, D) then (C, F) then (F, E)
C. (A, G) then (B, C) then (E, F) then (A, B) then (C, F) then (D, E)
D. (A, G) then (A, B) then (A, C) then (A, D) then (A, D) then (C, F)

20. For the figure 3, w [ ]


tree algorithm to add edges to the minimum spanning tree?
A. (A, G) then (G, C) then (C, B) then (C, F) then (F, E) then (E, D)
B. (A, G) then (A, B) then (B, C) then (A, D) then (C, F) then (F, E)
C. (A, G) then (B, C) then (E, F) then (A, B) then (C, F) then (D, E)
D. (A, G) then (A, B) then (A, C) then (A, D) then (A, D) then (C, F)

21. Which algorithm does not construct an in-tree as part of its processing? [ ]

Algorithm
D. The Depth-First Search Trace Algorithm

22. The worst- -cost spanning tree [ ]


algorithm on a graph with n vertices and m edges is:
A. C.
B. D.

23. An adjacency matrix representation of a graph cannot contain [ ]


information of:
A. Nodes C. Direction of edges
B. Edges D. Parallel edges

Node	Adjacency List
A	D
B	A, C
C	G, D, F
D	----
E	C, D
F	E, A
G	B

FIGURE 4 and its adjacency list (the drawing of the digraph is lost in formatting).

24. For the figure 4, which edge does not occur in the depth first spanning [ ]
tree resulting from depth first search starting at node B:
A. F E C. C G
B. E C D. C F

25. The set of all edges generated by DFS tree starting at node B is: [ ]
A. B A D C G F E C. B A C D G F E
B. A D D. Cannot be generated

26. The set of all edges generated by BFS tree starting at node B is: [ ]
A. B A D C G F E C. B A C D G F E
B. A D D. Cannot be generated
Chapter
7
Searching and Sorting
There are basically two aspects of computer programming. One is data
organization also commonly called as data structures. Till now we have seen
about data structures and the techniques and algorithms used to access
them. The other part of computer programming involves choosing the
appropriate algorithm to solve the problem. Data structures and algorithms
are linked to each other. After developing programming techniques to represent
information, it is logical to proceed to manipulate it. This chapter introduces
this important aspect of problem solving.

Searching is used to find the location where an element is available. There are two
types of search techniques. They are:

1. Linear or sequential search


2. Binary search

Sorting allows an efficient arrangement of elements within a given data structure. It is


a way in which the elements are organized systematically for some purpose. For
example, a dictionary in which words are arranged in alphabetical order and a telephone
directory in which the subscriber names are listed in alphabetical order. There are many
sorting techniques out of which we study the following.

1. Bubble sort
2. Quick sort
3. Selection sort and
4. Heap sort

There are two types of sorting techniques:

1. Internal sorting
2. External sorting

If all the elements to be sorted are present in the main memory then such sorting is
called internal sorting. On the other hand, if some of the elements to be sorted are
kept on the secondary storage, it is called external sorting. Here we study only
internal sorting techniques.

7.1. Linear Search:

This is the simplest of all searching techniques. In this technique, an ordered or


unordered list will be searched one by one from the beginning until the desired element
is found. If the desired element is found in the list then the search is successful
otherwise unsuccessful.
Suppose there are 'n' elements organized sequentially on a list. The number of
comparisons required to retrieve an element from the list purely depends on where the
element is stored in the list. If it is the first element, one comparison will do; if it is the
second element two comparisons are necessary and so on. On an average you need
(n + 1)/2 comparisons to search an element. If the search is not successful, you would
need 'n' comparisons.

The time complexity of linear search is O(n).

Algorithm:

linsrch(a[n], x)
{
index = 0;
flag = 0;
while (index < n) do
{
if (x == a[index])
{
flag = 1;
break;
}
index ++;
}
	if(flag == 1)
		print data found at position index;
	else
		print data not found;
}

Example 1:

Suppose we have the following unsorted list: 45, 39, 8, 54, 77, 38, 24, 16, 4, 7, 9, 20

If we are searching for 54, we have to look at 4 elements before success.

If we are searching for 7, we have to look at 10 elements before success.

If we are searching for a value that is not in the list (say 13), we have to look at all 12
elements before failure.
Example 2:

Let us illustrate linear search on the following 9 elements:

Index 0 1 2 3 4 5 6 7 8
Elements -15 -6 0 7 9 23 54 82 101

Searching different elements is as follows:

1. Searching for x = 7: Search successful, data found at 3rd position.

2. Searching for x = 82: Search successful, data found at 7th position.

3. Searching for x = 42: Search un-successful, data not found.

7.1.1. A non-recursive program for Linear Search:

# include <stdio.h>
# include <conio.h>

main()
{
int number[25], n, data, i, flag = 0;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements: ");
for(i = 0; i < n; i++)
scanf("%d", &number[i]);
printf("\n Enter the element to be Searched: ");
scanf("%d", &data);
for( i = 0; i < n; i++)
{
if(number[i] == data)
{
flag = 1;
break;
}
}
if(flag == 1)
printf("\n Data found at location: %d", i+1);
else
printf("\n Data not found ");
}

7.1.2. A Recursive program for linear search:

# include <stdio.h>
# include <conio.h>

void linear_search(int a[], int data, int position, int n)


{
if(position < n)
{
if(a[position] == data)
printf("\n Data Found at %d ", position);
else
linear_search(a, data, position + 1, n);
}
else
printf("\n Data not found");
}

void main()
{
int a[25], i, n, data;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements: ");
for(i = 0; i < n; i++)
{
scanf("%d", &a[i]);
}
printf("\n Enter the element to be seached: ");
scanf("%d", &data);
linear_search(a, data, 0, n);
getch();
}

7.2. BINARY SEARCH

Suppose we have 'n' records which have been ordered by keys so that x1 < x2 < . . . . < xn,
and we are given an element 'x'. In case 'x' is present, we have to determine a value 'j'
such that a[j] = x (successful search). If 'x' is not in the list then j is set to zero
(unsuccessful search).

In binary search we jump into the middle of the file, where we find key a[mid], and
compare 'x' with a[mid]. If x = a[mid] then the desired record has been found. If
x < a[mid], then 'x' must be in that part of the file which precedes a[mid]. Similarly, if
x > a[mid], then further search is only necessary in that part of the file which follows
a[mid].

If we use the recursive procedure of finding the middle key a[mid] of the un-searched
portion of the file, then every un-successful comparison of 'x' with a[mid] will eliminate
roughly half the un-searched portion from consideration.

Since the array can be halved only about log2 n times before reaching a
trivial length, the worst case complexity of binary search is about log2 n.

Algorithm:

Let array a[n] of elements be in increasing order, n ≥ 0. The task is to determine whether
'x' is present, and if so, to set j such that x = a[j]; else return 0.

binsrch(a[], n, x)
{
	low = 1; high = n;
	while (low ≤ high) do
	{
		mid = (low + high)/2
		if (x < a[mid])
			high = mid - 1;
		else if (x > a[mid])
			low = mid + 1;
		else return mid;
	}
	return 0;
}

In binsrch, 'low' and 'high' are integer variables such that each time through the loop
either 'x' is found or 'low' is increased by at least one or 'high' is decreased by at least
one. Thus we have two sequences of integers approaching each other and eventually
'low' will become greater than 'high', which causes termination in a finite number of
steps if 'x' is not present.

Example 1:

Let us illustrate binary search on the following 12 elements:

Index 1 2 3 4 5 6 7 8 9 10 11 12
Elements 4 7 8 9 16 20 24 38 39 45 54 77

If we are searching for x = 4: (This needs 3 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 1, high = 5, mid = 6/2 = 3, check 8
low = 1, high = 2, mid = 3/2 = 1, check 4, found

If we are searching for x = 7: (This needs 4 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 1, high = 5, mid = 6/2 = 3, check 8
low = 1, high = 2, mid = 3/2 = 1, check 4
low = 2, high = 2, mid = 4/2 = 2, check 7, found

If we are searching for x = 8: (This needs 2 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 1, high = 5, mid = 6/2 = 3, check 8, found

If we are searching for x = 9: (This needs 3 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 1, high = 5, mid = 6/2 = 3, check 8
low = 4, high = 5, mid = 9/2 = 4, check 9, found

If we are searching for x = 16: (This needs 4 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 1, high = 5, mid = 6/2 = 3, check 8
low = 4, high = 5, mid = 9/2 = 4, check 9
low = 5, high = 5, mid = 10/2 = 5, check 16, found

If we are searching for x = 20: (This needs 1 comparison)


low = 1, high = 12, mid = 13/2 = 6, check 20, found
If we are searching for x = 24: (This needs 3 comparisons)
low = 1, high = 12, mid = 13/2 = 6, check 20
low = 7, high = 12, mid = 19/2 = 9, check 39
low = 7, high = 8, mid = 15/2 = 7, check 24, found

If we are searching for x = 38: (This needs 4 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 7, high = 12, mid = 19/2 = 9, check 39
low = 7, high = 8, mid = 15/2 = 7, check 24
low = 8, high = 8, mid = 16/2 = 8, check 38, found

If we are searching for x = 39: (This needs 2 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 7, high = 12, mid = 19/2 = 9, check 39, found

If we are searching for x = 45: (This needs 4 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 7, high = 12, mid = 19/2 = 9, check 39
low = 10, high = 12, mid = 22/2 = 11, check 54
low = 10, high = 10, mid = 20/2 = 10, check 45, found

If we are searching for x = 54: (This needs 3 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 7, high = 12, mid = 19/2 = 9, check 39
low = 10, high = 12, mid = 22/2 = 11, check 54, found

If we are searching for x = 77: (This needs 4 comparisons)


low = 1, high = 12, mid = 13/2 = 6, check 20
low = 7, high = 12, mid = 19/2 = 9, check 39
low = 10, high = 12, mid = 22/2 = 11, check 54
low = 12, high = 12, mid = 24/2 = 12, check 77, found

The number of comparisons necessary by search element:

20 requires 1 comparison;
8 and 39 requires 2 comparisons;
4, 9, 24, 54 requires 3 comparisons and
7, 16, 38, 45, 77 requires 4 comparisons

Summing the comparisons, needed to find all twelve items and dividing by 12, yielding
37/12 or approximately 3.08 comparisons per successful search on the average.

Example 2:

Let us illustrate binary search on the following 9 elements:

Index 0 1 2 3 4 5 6 7 8
Elements -15 -6 0 7 9 23 54 82 101

Solution:

The number of comparisons required for searching different elements is as follows:


1. If we are searching for x = 101: (Number of comparisons = 4)
	low	high	mid
	1	9	5
	6	9	7
	8	9	8
	9	9	9
			found

2. Searching for x = 82: (Number of comparisons = 3)
	low	high	mid
	1	9	5
	6	9	7
	8	9	8
			found

3. Searching for x = 42: (Number of comparisons = 3)
	low	high	mid
	1	9	5
	6	9	7
	6	6	6
	7	6	not found

4. Searching for x = -14: (Number of comparisons = 3)
	low	high	mid
	1	9	5
	1	4	2
	1	1	1
	2	1	not found

Continuing in this manner, the number of element comparisons needed to find each of the
nine elements is:

Index		1	2	3	4	5	6	7	8	9
Elements	-15	-6	0	7	9	23	54	82	101
Comparisons	3	2	3	4	1	3	2	3	4

No element requires more than 4 comparisons to be found. Summing the comparisons


needed to find all nine items and dividing by 9, yielding 25/9 or approximately 2.77
comparisons per successful search on the average.

There are ten possible ways that an un-successful search may terminate depending
upon the value of x.

If x < a(1), a(1) < x < a(2), a(2) < x < a(3), a(5) < x < a(6), a(6) < x < a(7) or a(7)
< x < a(8) the algorithm requires 3 element comparisons to determine that 'x' is not
present. For all of the remaining possibilities BINSRCH requires 4 element comparisons.

Thus the average number of element comparisons for an unsuccessful search is:

(3 + 3 + 3 + 4 + 4 + 3 + 3 + 3 + 4 + 4) / 10 = 34/10 = 3.4

Time Complexity:

The time complexity of binary search in a successful search is O(log n) and for an
unsuccessful search is O(log n).
7.2.1. A non-recursive program for binary search:

# include <stdio.h>
# include <conio.h>

main()
{
int number[25], n, data, i, flag = 0, low, high, mid;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements in ascending order: ");
for(i = 0; i < n; i++)
scanf("%d", &number[i]);
printf("\n Enter the element to be searched: ");
scanf("%d", &data);
low = 0; high = n-1;
while(low <= high)
{
mid = (low + high)/2;
if(number[mid] == data)
{
flag = 1;
break;
}
else
{
if(data < number[mid])
high = mid - 1;
else
low = mid + 1;
}
}
if(flag == 1)
printf("\n Data found at location: %d", mid + 1);
else
printf("\n Data Not Found ");
}

7.2.2. A recursive program for binary search:

# include <stdio.h>
# include <conio.h>

void bin_search(int a[], int data, int low, int high)


{
int mid ;
if( low <= high)
{
mid = (low + high)/2;
if(a[mid] == data)
printf("\n Element found at location: %d ", mid + 1);
else
{
if(data < a[mid])
bin_search(a, data, low, mid-1);
else
bin_search(a, data, mid+1, high);
}
}
else
printf("\n Element not found");
}
void main()
{
int a[25], i, n, data;
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter the elements in ascending order: ");
for(i = 0; i < n; i++)
scanf("%d", &a[i]);
printf("\n Enter the element to be searched: ");
scanf("%d", &data);
bin_search(a, data, 0, n-1);
getch();
}

7.3. Bubble Sort:

The bubble sort is easy to understand and program. The basic idea of bubble sort is to
pass through the file sequentially several times. In each pass, we compare each
element in the file with its successor i.e., X[i] with X[i+1] and interchange two element
when they are not in proper order. We will illustrate this sorting technique by taking a
specific example. Bubble sort is also called as exchange sort.

Example:

Consider the array x[n] which is stored in memory as shown below:

X[0]	X[1]	X[2]	X[3]	X[4]	X[5]

33	44	22	11	66	55

Suppose we want our array to be stored in ascending order. Then we pass through the
array 5 times as described below:

Pass 1: (first element is compared with all other elements).

We compare X[i] and X[i+1] for i = 0, 1, 2, 3, and 4, and interchange X[i] and X[i+1]
if X[i] > X[i+1]. The process is shown below:

X[0]	X[1]	X[2]	X[3]	X[4]	X[5]	Remarks
33	44	22	11	66	55	compare 33 and 44: no interchange
33	22	44	11	66	55	compare 44 and 22: interchange
33	22	11	44	66	55	compare 44 and 11: interchange
33	22	11	44	66	55	compare 44 and 66: no interchange
33	22	11	44	55	66	compare 66 and 55: interchange

The biggest number 66 is moved to (bubbled up) the right most position in the array.
Pass 2: (second element is compared).

i.e., we compare X[i] with X[i+1] for i=0, 1, 2, and 3 and interchange X[i] and X[i+1]
if X[i] > X[i+1]. The process is shown below:

X[0]	X[1]	X[2]	X[3]	X[4]	Remarks
22	33	11	44	55	compare 33 and 22: interchange
22	11	33	44	55	compare 33 and 11: interchange
22	11	33	44	55	compare 33 and 44: no interchange
22	11	33	44	55	compare 44 and 55: no interchange

The second biggest number 55 is moved now to X[4].

Pass 3: (third element is compared).

We repeat the same process, but this time we leave both X[4] and X[5]. By doing this,
we move the third biggest number 44 to X[3].

X[0]	X[1]	X[2]	X[3]	Remarks
11	22	33	44	compare 22 and 11: interchange
11	22	33	44	compare 22 and 33: no interchange
11	22	33	44	compare 33 and 44: no interchange

Pass 4: (fourth element is compared).

We repeat the process leaving X[3], X[4], and X[5]. By doing this, we move the fourth
biggest number 33 to X[2].

X[0]	X[1]	X[2]	Remarks
11	22	33	compare 11 and 22: no interchange
11	22	33	compare 22 and 33: no interchange

Pass 5: (fifth element is compared).

We repeat the process leaving X[2], X[3], X[4], and X[5]. By doing this, we move the
fifth biggest number 22 to X[1]. At this time, we will have the smallest number 11 in
X[0]. Thus, we see that we can sort the array of size 6 in 5 passes.

For an array of size n, we required (n-1) passes.


7.3.1. Program for Bubble Sort:

#include <stdio.h>
#include <conio.h>
void bubblesort(int x[], int n)
{
int i, j, temp;
for (i = 0; i < n; i++)
{
for (j = 0; j < n - i - 1; j++)
{
if (x[j] > x[j+1])
{
temp = x[j];
x[j] = x[j+1];
x[j+1] = temp;
}
}
}
}

main()
{
int i, n, x[25];
clrscr();
printf("\n Enter the number of elements: ");
scanf("%d", &n);
printf("\n Enter Data:");
for(i = 0; i < n ; i++)
scanf("%d", &x[i]);
bubblesort(x, n);
printf ("\n Array Elements after sorting: ");
for (i = 0; i < n; i++)
printf ("%5d", x[i]);
}

Time Complexity:

The bubble sort method of sorting an array of size n requires (n-1) passes and at most (n-1)
comparisons on each pass. Thus the total number of comparisons is (n-1) * (n-1) = n2 -
2n + 1, which is O(n2). Therefore bubble sort is very inefficient when there are many
elements to sort.

7.4. Selection Sort:

Selection sort requires no more than n-1 interchanges. Suppose x is an array of
size n stored in memory. The selection sort algorithm first selects the smallest element
in the array x and place it at array position 0; then it selects the next smallest element
in the array x and place it at array position 1. It simply continues this procedure until it
places the biggest element in the last position of the array.

The array is passed through (n-1) times and the smallest element is placed in its
respective position in the array as detailed below:
Pass 1: Find the location j of the smallest element in the array x [0], x[1], . . . . x[n-1],
and then interchange x[j] with x[0]. Then x[0] is sorted.

Pass 2: Leave the first element and find the location j of the smallest element in the
sub-array x[1], x[2], . . . . x[n-1], and then interchange x[1] with x[j]. Then
x[0], x[1] are sorted.

Pass 3: Leave the first two elements and find the location j of the smallest element in
the sub-array x[2], x[3], . . . . x[n-1], and then interchange x[2] with x[j].
Then x[0], x[1], x[2] are sorted.

Pass (n-1): Find the location j of the smaller of the elements x[n-2] and x[n-1], and
then interchange x[j] and x[n-2]. Then x[0], x[1], . . . . x[n-2] are sorted. Of
course, during this pass x[n-1] will be the biggest element and so the entire
array is sorted.

Time Complexity:

In general we prefer selection sort in cases where the insertion sort or the bubble sort
requires excessive swapping. In spite of superiority of the selection sort over bubble
sort and the insertion sort (there is significant decrease in run time), its efficiency is
also O(n2) for n data items.

Example:

Let us consider the following example with 9 elements to analyze selection Sort:

(The step-by-step trace is lost in formatting. In each pass i, the smallest element of the
unsorted portion x[i], . . . , x[n-1] is found and interchanged with x[i]; the trace ends
with the elements in ascending order, beginning 45 50 55 60 65 70 75 80, after which
the outer loop ends.)


7.4.1. Non-recursive Program for selection sort:

# include<stdio.h>
# include<conio.h>

void selectionSort( int low, int high );

int a[25];

int main()
{
int num, i= 0;
clrscr();
printf( "Enter the number of elements: " );
scanf("%d", &num);
printf( "\nEnter the elements:\n" );
for(i=0; i < num; i++)
scanf( "%d", &a[i] );
selectionSort( 0, num - 1 );
printf( "\nThe elements after sorting are: " );
for( i=0; i< num; i++ )
printf( "%d ", a[i] );
return 0;
}

void selectionSort( int low, int high )


{
int i=0, j=0, temp=0, minindex;
for( i=low; i <= high; i++ )
{
minindex = i;
for( j=i+1; j <= high; j++ )
{
if( a[j] < a[minindex] )
minindex = j;
}
temp = a[i];
a[i] = a[minindex];
a[minindex] = temp;
}
}

7.4.2. Recursive Program for selection sort:

#include <stdio.h>
#include<conio.h>

int x[6] = {77, 33, 44, 11, 66};


selectionSort(int);

main()
{
int i, n = 0;
clrscr();
printf (" Array Elements before sorting: ");
for (i=0; i<5; i++)
printf ("%d ", x[i]);
selectionSort(n); /* call selection sort */
printf ("\n Array Elements after sorting: ");
for (i=0; i<5; i++)
printf ("%d ", x[i]);
}

selectionSort( int n)
{
int k, p, temp, min;
if (n== 4)
return (-1);
min = x[n];
p = n;
for (k = n+1; k<5; k++)
{
if (x[k] <min)
{
min = x[k];
p = k;
}
}
temp = x[n]; /* interchange x[n] and x[p] */
x[n] = x[p];
x[p] = temp;
n++ ;
selectionSort(n);
}

7.5. Quick Sort:

Quick sort is one of the most efficient sorting algorithms. It is an example of a class of
algorithms that work by the divide and conquer technique.
The quick sort algorithm partitions the original array by rearranging it into two groups.
The first group contains those elements less than some arbitrary chosen value taken
from the set, and the second group contains those elements greater than or equal to
the chosen value. The chosen value is known as the pivot element. Once the array has
been rearranged in this way with respect to the pivot, the same partitioning procedure
is recursively applied to each of the two subsets. When all the subsets have been
partitioned and rearranged, the original array is sorted.

The function partition() makes use of two pointers up and down which are moved
toward each other in the following fashion:

1. Repeatedly increase the pointer 'up' until a[up] >= pivot.
2. Repeatedly decrease the pointer 'down' until a[down] <= pivot.
3. If down > up, interchange a[down] with a[up].
4. Repeat the steps 1, 2 and 3 until the 'up' pointer crosses the 'down' pointer. When the
   pointers cross, the proper position for the pivot is found and the pivot element is placed at a[down].
The program uses a recursive function quicksort(). The algorithm of the quick sort function
is as follows:

1. It terminates when the condition low >= high is satisfied. This condition will
   be satisfied only when the array is completely sorted.

2. Here we choose the first element as the pivot, i.e. pivot = x[low]. The function
   calls the partition function to find the proper position j of the element x[low]
   i.e. the pivot. Then we will have two sub-arrays x[low], x[low+1], . . . . . . x[j-1]
   and x[j+1], x[j+2], . . . x[high].

3. It calls itself recursively to sort the left sub-array x[low], x[low+1], . . . . . . .


x[j-1] between positions low and j-1 (where j is returned by the partition
function).

4. It calls itself recursively to sort the right sub-array x[j+1], x[j+2], . . x[high]
between positions j+1 and high.

The time complexity of quick sort algorithm is of O(n log n).

Algorithm

Sorts the elements a[p], . . . . . , a[q] which reside in the global array a[n] into
ascending order. The element a[n + 1] is considered to be defined and must be greater than all
elements in a[n]; i.e. a[n + 1] = +∞.

quicksort (p, q)
{
if ( p < q ) then
{
call j = PARTITION(a, p, q+1); // j is the position of the partitioning element
call quicksort(p, j - 1);
call quicksort(j + 1 , q);
}
}

partition(a, m, p)
{
	v = a[m]; up = m; down = p;
	do
	{
		repeat
			up = up + 1;
		until (a[up] >= v);

		repeat
			down = down - 1;
		until (a[down] <= v);

		if (up < down) then call interchange(a, up, down);
	} while (up < down);

	a[m] = a[down];
	a[down] = v;
	return (down);
}
interchange(a, up, down)
{
p = a[up];
a[up] = a[down];
a[down] = p;
}

Example:

Select the first element as the pivot. Move the 'up' pointer towards the right in search of
an element greater than or equal to the pivot, and the 'down' pointer towards the left in
search of an element smaller than or equal to the pivot. If such elements are found, the
elements are swapped; when the pointers cross, the pivot is placed in its final position
and the two sub-arrays on either side are partitioned in the same manner.

Let us consider an example with 13 elements to analyze quick sort. (The detailed
step-by-step trace is lost in formatting.) After repeated partitioning, the elements end up
in ascending order:

02 04 06 08 16 24 38 45 56 57 58 70 79

7.5.1. Recursive program for Quick Sort:

# include<stdio.h>
# include<conio.h>

void quicksort(int, int);


int partition(int, int);
void interchange(int, int);
int array[25];

int main()
{
int num, i = 0;
clrscr();
printf( "Enter the number of elements: " );
scanf( "%d", &num);
printf( "Enter the elements: " );
for(i=0; i < num; i++)
scanf( "%d", &array[i] );
quicksort(0, num -1);
printf( "\nThe elements after sorting are: " );
for(i=0; i < num; i++)
printf("%d ", array[i]);
return 0;
}

void quicksort(int low, int high)


{
int pivotpos;
if( low < high )
{
pivotpos = partition(low, high + 1);
quicksort(low, pivotpos - 1);
quicksort(pivotpos + 1, high);
}
}

int partition(int low, int high)


{
int pivot = array[low];
int up = low, down = high;

do
{
do
up = up + 1;
while(array[up] < pivot );

do
down = down - 1;
while(array[down] > pivot);

if(up < down)


interchange(up, down);

} while(up < down);


array[low] = array[down];
array[down] = pivot;
return down;
}

void interchange(int i, int j)


{
int temp;
temp = array[i];
array[i] = array[j];
array[j] = temp;
}
Exercises

1.
time complexity.

2. Find the expected number of passes, comparisons and exchanges for


bubble
results with the actual number of operations when the given sequence is as
follows: 7, 1, 3, 4, 10, 9, 8, 6, 5, 2.

3.
arr

position of the first such element in the array.

4. -wise and column-wise. Assume


that the matrix is represented by a two dimensional array.

5. A very large array of elements is to be sorted. The program is to be run on


a personal computer with limited memory. Which sort would be a better
choice: Heap sort or Quick sort? Why?

6. Here is an array of ten integers: 5 3 8 9 1 7 0 2 6 4


Suppose we partition this array using quicksort's partition function and
using 5 for the pivot. Draw the resulting array after the partition finishes.

7. Here is an array which has just been partitioned by the first step of
quicksort: 3, 0, 2, 4, 5, 8, 7, 6, 9. Which of these elements could be the
pivot? (There may be more than one possibility!)

8. Show the result of inserting 10, 12, 1, 14, 6, 5, 8, 15, 3, 9, 7, 4, 11, 13,
and 2, one at a time, into an initially empty binary heap.

9. Sort the sequence 3, 1, 4, 5, 9, 2, 6, 5 using insertion sort.


Multiple Choice Questions

What is the worst-case time for serial search finding a single item in an [ ]
array?
A. Constant time C. Logarithmic time
B. Quadratic time D. Linear time

What is the worst-case time for binary search finding a single item in an [ ]
array?
A. Constant time C. Logarithmic time
B. Quadratic time D. Linear time

What additional requirement is placed on an array, so that binary search [ ]


may be used to locate an entry?
A. The array elements must form a heap.
B. The array must have at least 2 entries
C. The array must be sorted.
D. The array's size must be a power of two.

Which searching can be performed recursively ? [ ]


A. linear search C. Binary search
B. both D. none

Which searching can be performed iteratively ? [ ]


A. linear search C. Binary search
B. both D. none

In a selection sort of n elements, how many times is the swap function [ ]


called in the complete execution of the algorithm?
A. 1 C. n 1
B. n2 D. n log n
Selection sort and quick sort both fall into the same category of sorting [ ]
algorithms. What is this category?
A. O(n log n) sorts C. Divide-and-conquer sorts
B. Interchange sorts D. Average time is quadratic

Suppose that a selection sort of 100 items has completed 42 iterations of [ ]


the main loop. How many items are now guaranteed to be in their final spot
(never to be moved again)?
A. 21 C. 42
B. 41 D. 43

When is insertion sort a good choice for sorting an array? [ ]


A. Each component of the array requires a large amount of memory
B. The array has only a few items out of place
C. Each component of the array requires a small amount of memory
D. The processor speed is fast
What is the worst-case time for quick sort to sort an array of n elements? [ ]
A. O(log n) C. O(n log n)
B. O(n) D. O(n²)

Suppose we are sorting an array of eight integers using quick sort, and we [ ]
have just finished the first partitioning with the array looking like this:
2 5 1 7 9 12 11 10 Which statement is correct?
A. The pivot could be either the 7 or the 9.
B. The pivot is not the 7, but it could be the 9.
C. The pivot could be the 7, but it is not the 9.
D. Neither the 7 nor the 9 is the pivot

What is the worst-case time for heap sort to sort an array of n elements? [ ]
A. O(log n) C. O(n log n)
B. O(n) D. O(n²)

Suppose we are sorting an array of eight integers using heap sort, and we [ ]
have just finished one of the reheapifications downward. The array now
looks like this: 6 4 5 1 2 7 8
How many reheapifications downward have been performed so far?
A. 1 C. 2
B. 3 or 4 D. 5 or 6

Time complexity of inserting an element to a heap of n elements is of the [ ]


order of
A. log2 n C. n log2n
B. n2 D. n
A min heap is the tree structure where smallest element is available at the [ ]
A. leaf C. intermediate parent
B. root D. any where

In the quick sort method , a desirable choice for the portioning element will [ ]
be
A. first element of list C. median of list
B. last element of list D. any element of list

Quick sort is also known as [ ]


A. merge sort C. heap sort
B. bubble sort D. none

Which design algorithm technique is used for quick sort . [ ]


A. Divide and conqueror C. backtrack
B. greedy D. dynamic programming

Which among the following is fastest sorting technique (for unordered data) [ ]
A. Heap sort C. Quick Sort
B. Selection Sort D. Bubble sort

In which searching technique elements are eliminated by half in each pass . [ ]


A. Linear search C. Binary search
B. both D. none

Running time of Heap sort algorithm is -----. [ ]


A. O( log2 n) C. O(n)
B. O(n log2 n) D. O(n2)
Running time of Bubble sort algorithm is -----. [ ]
A. O( log2 n) C. O(n)
B. O(n log2 n) D. O(n2)

Running time of Selection sort algorithm is -----. [ ]


A. O( log2 n) C. O(n)
B. O(n log2 n) D. O(n2)

The Max heap constructed from the list of numbers 30,10,80,60,15,55 is [ ]


A. 60,80,55,30,10,15 C. 80,55,60,15,10,30
B. 80,60,55,30,10,15 D. none

The number of swappings needed to sort the numbers 8,22,7,9,31,19,5,13 [ ]


in ascending order using bubble sort is
A. 11 C. 13
B. 12 D. 14

Time complexity of insertion sort algorithm in best case is [ ]


A. O( log2 n) C. O(n)
B. O(n log2 n) D. O(n2)

Binary search algorithm performs efficiently on a [ ]


A. linked list C. array
B. both D. none

Which is a stable sort ? [ ]


A. Bubble sort C. Quick sort
B. Selection Sort D. none

Heap is a good data structure to implement [ ]


A. priority Queue C. linear queue
B. Deque D. none

Always Heap is a [ ]
A. complete Binary tree C. Full Binary tree
B. Binary Search Tree D. none
LECTURE NOTES ON

DESIGN AND ANALYSIS OF ALGORITHMS

MCA - 401
Basic Concepts

Algorithm:

An Algorithm is a finite sequence of instructions, each of which has a clear meaning


and can be performed with a finite amount of effort in a finite length of time. No
matter what the input values may be, an algorithm terminates after executing a finite
number of instructions. In addition every algorithm must satisfy the following
criteria:

 Input: there are zero or more quantities, which are externally supplied;

 Output: at least one quantity is produced;

 Definiteness: each instruction must be clear and unambiguous;

 Finiteness: if we trace out the instructions of an algorithm, then for all cases the
algorithm will terminate after a finite number of steps;

 Effectiveness: every instruction must be sufficiently basic that it can in principle be


carried out by a person using only pencil and paper. It is not enough that each
operation be definite, but it must also be feasible.

In formal computer science, one distinguishes between an algorithm, and a program.


A program does not necessarily satisfy the fourth condition. One important example
of such a program for a computer is its operating system, which never terminates
(except for system crashes) but continues in a wait loop until more jobs are entered.

We represent algorithm using a pseudo language that is a combination of the


constructs of a programming language together with informal English statements.

Performance of a program:

The performance of a program is the amount of computer memory and time needed
to run a program. We use two approaches to determine the performance of a
program. One is analytical, and the other experimental. In performance analysis we
use analytical methods, while in performance measurement we conduct experiments.

Time Complexity:

The time needed by an algorithm expressed as a function of the size of a problem is


called the time complexity of the algorithm. The time complexity of a program is the
amount of computer time it needs to run to completion.

The limiting behavior of the complexity as size increases is called the asymptotic time
complexity. It is the asymptotic complexity of an algorithm, which ultimately
determines the size of problems that can be solved by the algorithm.
Space Complexity:

The space complexity of a program is the amount of memory it needs to run to


completion. The space need by a program has the following components:

Instruction space: Instruction space is the space needed to store the compiled
version of the program instructions.

Data space: Data space is the space needed to store all constant and variable
values. Data space has two components:

 Space needed by constants and simple variables in program.


 Space needed by dynamically allocated objects such as arrays and class
instances.

Environment stack space: The environment stack is used to save information


needed to resume execution of partially completed functions.

Instruction Space: The amount of instructions space that is needed depends on


factors such as:

 The compiler used to compile the program into machine code.


 The compiler options in effect at the time of compilation
 The target computer.

Algorithm Design Goals:

The three basic design goals that one should strive for in a program are:

1. Try to save Time


2. Try to save Space
3. Try to save Face

A program that runs faster is a better program, so saving time is an obvious


goal. Like wise, a program that saves space over a competing program is considered
desirable. We want to “save face” by preventing the program from locking up or
generating reams of garbled data.

Classification of Algorithms

If ‘n’ is the number of data items to be processed or degree of polynomial or the size
of the file to be sorted or searched or the number of nodes in a graph etc.

1	The instructions of most programs are executed once or at most only a
	few times. If all the instructions of a program have this property, we say
	that its running time is a constant.

Log n	When the running time of a program is logarithmic, the program gets
	slightly slower as n grows. This running time commonly occurs in
	programs that solve a big problem by transforming it into a smaller
	problem, cutting the size by some constant fraction. When n is a million,
	log2 n is only about 20. Whenever n doubles, log n increases by a constant,
	but log n does not double until n increases to n2.
n When the running time of a program is linear, it is generally the case that
a small amount of processing is done on each input element. This is the
optimal situation for an algorithm that must process n inputs.

n log n	This running time arises for algorithms that solve a problem by breaking it up into smaller sub-problems, solving them independently, and then combining the solutions. When n doubles, the running time more than doubles.

n²	When the running time of an algorithm is quadratic, it is practical for use only on relatively small problems. Quadratic running times typically arise in algorithms that process all pairs of data items (perhaps in a doubly nested loop). Whenever n doubles, the running time increases fourfold.

n³	Similarly, an algorithm that processes triples of data items (perhaps in a triple-nested loop) has a cubic running time and is practical for use only on small problems. Whenever n doubles, the running time increases eightfold.

2ⁿ	Few algorithms with exponential running time are likely to be appropriate for practical use; such algorithms arise naturally as “brute-force” solutions to problems. Whenever n doubles, the running time squares.

Complexity of Algorithms:

The complexity of an algorithm M is the function f(n) which gives the running time
and/or storage space requirement of the algorithm in terms of the size ‘n’ of the
input data. Mostly, the storage space required by an algorithm is simply a multiple of
the data size ‘n’. In what follows, complexity shall refer to the running time of the algorithm.

The function f(n), which gives the running time of an algorithm, depends not only on the size ‘n’ of the input data but also on the particular data. The complexity function f(n) for certain cases is defined as follows:

1. Best Case : The minimum possible value of f(n) is called the best case.

2. Average Case : The expected value of f(n).

3. Worst Case : The maximum value of f(n) over all possible inputs.

The field of computer science that studies the efficiency of algorithms is known as analysis of algorithms.

Algorithms can be evaluated by a variety of criteria. Most often we shall be interested in the rate of growth of the time or space required to solve larger and larger instances of a problem. We will associate with the problem an integer, called the size of the problem, which is a measure of the quantity of input data.
Rate of Growth:

The following notations are commonly used in performance analysis to characterize the complexity of an algorithm:

1. Big–OH (O),
2. Big–OMEGA (Ω),
3. Big–THETA (Θ), and
4. Little–OH (o)

Big–OH (O) (Upper Bound)

f(n) = O(g(n)), (pronounced order of or big oh), says that the growth rate of f(n) is less than or equal to (≤) that of g(n).

Big–OMEGA (Ω) (Lower Bound)

f(n) = Ω(g(n)) (pronounced omega), says that the growth rate of f(n) is greater than or equal to (≥) that of g(n).
Big–THETA (Θ) (Same order)
f(n) = Θ(g(n)) (pronounced theta), says that the growth rate of f(n) equals (=) the growth rate of g(n) [that is, f(n) = O(g(n)) and f(n) = Ω(g(n))].

Little–OH (o)

T(n) = o(p(n)) (pronounced little oh), says that the growth rate of T(n) is strictly less than the growth rate of p(n) [that is, T(n) = O(p(n)) and T(n) ≠ Θ(p(n))].

Analyzing Algorithms:

Suppose ‘M’ is an algorithm, and suppose ‘n’ is the size of the input data. Clearly the
complexity f(n) of M increases as n increases. It is usually the rate of increase of f(n)
we want to examine. This is usually done by comparing f(n) with some standard
functions. The most common computing times are:

O(1), O(log₂ n), O(n), O(n·log₂ n), O(n²), O(n³), O(2ⁿ), O(n!) and O(nⁿ)

Numerical Comparison of Different Algorithms:

The execution time for six of the typical functions is given below:

n      log₂ n   n·log₂ n   n²        n³           2ⁿ
1      0        0          1         1            2
2      1        2          4         8            4
4      2        8          16        64           16
8      3        24         64        512          256
16     4        64         256       4096         65,536
32     5        160        1024      32,768       4,294,967,296
64     6        384        4096      262,144      Note 1
128    7        896        16,384    2,097,152    Note 2
256    8        2048       65,536    16,777,216   ????????

Note 1: The value here is approximately the number of machine instructions executed by a 1 gigaflop computer in 5000 years.
Note 2: The value here is about 500 billion times the age of the universe in nanoseconds, assuming a universe age of 20 billion years.
[Graph of log n, n, n log n, n², n³, 2ⁿ, n! and nⁿ — figure omitted.]

O(log n) does not depend on the base of the logarithm. To simplify the analysis, the convention is to work without any particular units of time; thus we throw away leading constants, and we also throw away low-order terms while computing a Big-Oh running time. Since Big-Oh is an upper bound, the answer provided is a guarantee that the program will terminate within a certain time period. The program may stop earlier than this, but never later.

One way to compare the function f(n) with these standard functions is to use the functional ‘O’ notation. Suppose f(n) and g(n) are functions defined on the positive integers with the property that f(n) is bounded by some multiple of g(n) for almost all ‘n’. Then,

f(n) = O(g(n))

which is read as “f(n) is of order g(n)”. For example, the order of complexity for:

• Linear search is O(n)
• Binary search is O(log n)
• Bubble sort is O(n²)
• Merge sort is O(n log n)

The rule of sums:

Suppose that T1(n) and T2(n) are the running times of two program fragments P1 and P2, and that T1(n) is O(f(n)) and T2(n) is O(g(n)). Then T1(n) + T2(n), the running time of P1 followed by P2, is O(max(f(n), g(n))). This is called the rule of sums.

For example, suppose that we have three steps whose running times are respectively O(n²), O(n³) and O(n log n). Then the running time of the first two steps executed sequentially is O(max(n², n³)), which is O(n³). The running time of all three together is O(max(n³, n log n)), which is O(n³).
The rule of products:

If T1(n) and T2(n) are O(f(n)) and O(g(n)) respectively, then T1(n)*T2(n) is O(f(n) g(n)). It follows from the product rule that O(c f(n)) means the same thing as O(f(n)) if ‘c’ is any positive constant. For example, O(n²/2) is the same as O(n²).
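
Both rules can be seen in a small C sketch (the loop bodies are trivial stand-ins invented for the example; only the shape of the fragments matters). Two fragments executed one after the other combine by the rule of sums, while nested loops combine by the rule of products.

#include <stdio.h>

void demo(int n, int a[], int b[][100])
{
    int i, j;

    /* Fragment P1: a single loop, O(n). */
    for (i = 0; i < n; i++)
        a[i] = i;

    /* Fragment P2: nested loops -- by the rule of products,
       O(n) iterations times an O(n) inner loop gives O(n * n) = O(n^2). */
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            b[i][j] = a[i] + j;

    /* P1 followed by P2 -- by the rule of sums,
       O(max(n, n^2)) = O(n^2) for the whole function. */
}

int main(void)
{
    static int a[100], b[100][100];
    demo(100, a, b);
    printf("b[99][99] = %d\n", b[99][99]);   /* prints 198 */
    return 0;
}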
Suppose that we have five algorithms A1–A5 with the following time complexities:

A1 : n
A2 : n log n
A3 : n²
A4 : n³
A5 : 2ⁿ

The time complexity is the number of time units required to process an input of size ‘n’. Assuming that one unit of time equals one millisecond, the size of the problems that can be solved by each of these five algorithms is:

Algorithm   Time complexity   Maximum problem size
                              1 second    1 minute    1 hour
A1          n                 1000        6 × 10⁴     3.6 × 10⁶
A2          n log n           140         4893        2.0 × 10⁵
A3          n²                31          244         1897
A4          n³                10          39          153
A5          2ⁿ                9           15          21

The speed of computation has increased so much over the last thirty years that it might seem efficiency in algorithms is no longer important. But, paradoxically, efficiency matters more today than ever before. The reason is that our ambition has grown with our computing power. Virtually all applications of computing, such as the simulation of physical systems, are demanding more speed.

The faster computers run, the greater the need for efficient algorithms to take advantage of their power. As computers become faster and we can handle larger problems, it is the complexity of an algorithm that determines the increase in problem size that can be achieved with an increase in computer speed.

Suppose the next generation of computers is ten times faster than the current
generation, from the table we can see the increase in size of the problem.

Algorithm   Time complexity   Maximum problem size   Maximum problem size
                              before speed-up        after speed-up
A1          n                 S1                     10 S1
A2          n log n           S2                     10 S2 for large S2
A3          n²                S3                     3.16 S3
A4          n³                S4                     2.15 S4
A5          2ⁿ                S5                     S5 + 3.3

Instead of an increase in speed, consider the effect of using a more efficient algorithm. Looking at the first table above and taking one minute as the basis for comparison: by replacing algorithm A4 with A3, we can solve a problem six times larger; by replacing A4 with A2, we can solve a problem 125 times larger. These results are far more impressive than the roughly twofold improvement obtained by a tenfold increase in speed. If an hour is used as the basis of comparison, the differences are even more significant.
We therefore conclude that the asymptotic complexity of an algorithm is an
important measure of the goodness of an algorithm.

The Running time of a program:

When solving a problem we are faced with a choice among algorithms. The basis for
this can be any one of the following:

i. We would like an algorithm that is easy to understand, code and debug.

ii. We would like an algorithm that makes efficient use of the computer’s
resources, especially, one that runs as fast as possible.

Measuring the running time of a program:

The running time of a program depends on factors such as:

1. The input to the program.

2. The quality of code generated by the compiler used to create the object
program.

3. The nature and speed of the instructions on the machine used to execute the
program, and

4. The time complexity of the algorithm underlying the program.

It is customary to express the running time T(n) as a function of the size of the input rather than of the exact input. For many programs, however, the running time is really a function of the particular input, and not just of the input size. In that case we define T(n) to be the worst-case running time, i.e. the maximum, over all inputs of size ‘n’, of the running time on that input. We also consider Tavg(n), the average, over all inputs of size ‘n’, of the running time on that input. In practice, the average running time is often much harder to determine than the worst-case running time. Thus, we will use the worst-case running time as the principal measure of time complexity.
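
In symbols (a sketch in LaTeX notation, where t(x) denotes the running time on a particular input x and I_n is the set of all inputs of size n; for the average case we assume every input of size n is equally likely, an assumption that must be justified for each problem):

    T(n) = \max_{x \in I_n} t(x)
    \qquad\text{and}\qquad
    T_{avg}(n) = \frac{1}{|I_n|} \sum_{x \in I_n} t(x).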

In view of remarks (2) and (3) above, we cannot express the running time T(n) in standard time units such as seconds. Rather, we can only make remarks like “the running time of such and such an algorithm is proportional to n²”. The constant of proportionality will remain unspecified, since it depends so heavily on the compiler, the machine and other factors.

Asymptotic Analysis of Algorithms:

Our approach is based on the asymptotic complexity measure. This means that we
don’t try to count the exact number of steps of a program, but how that number
grows with the size of the input to the program. That gives us a measure that will
work for different operating systems, compilers and CPUs. The asymptotic complexity
is written using big-O notation.

Rules for using big-O:

The most important property is that big-O gives an upper bound only. If an algorithm is O(n²), it doesn’t have to take n² steps (or a constant multiple of n²), but it can’t take more than a constant multiple of n². So any algorithm that is O(n) is also an O(n²) algorithm. If this seems confusing, think of big-O as being like "<": any number that is < n is also < n².
1. Ignoring constant factors: O(c f(n)) = O(f(n)), where c is a constant; e.g. O(20 n³) = O(n³).

2. Ignoring smaller terms: If a < b then O(a+b) = O(b); for example, O(n²+n) = O(n²).

3. Upper bound only: If a < b then an O(a) algorithm is also an O(b) algorithm. For example, an O(n) algorithm is also an O(n²) algorithm (but not vice versa).

4. n and log n are "bigger" than any constant, from an asymptotic view (that
means for large enough n). So if k is a constant, an O(n + k) algorithm is
also O(n), by ignoring smaller terms. Similarly, an O(log n + k) algorithm
is also O(log n).

5. Another consequence of the last item is that an O(n log n + n) algorithm, which is O(n(log n + 1)), can be simplified to O(n log n).

Calculating the running time of a program:

Let us now look into how big-O bounds can be computed for some common
algorithms.

Example 1:

Let’s consider a short piece of source code:

x = 3*y + 2;
z = z + 1;

If y, z are scalars, this piece of code takes a constant amount of time, which we write
as O(1). In terms of actual computer instructions or clock ticks, it’s difficult to say
exactly how long it takes. But whatever it is, it should be the same whenever this
piece of code is executed. O(1) means some constant, it might be 5, or 1 or 1000.

Example 2:

2n² + 5n – 6 = O(2ⁿ)        2n² + 5n – 6 ≠ Θ(2ⁿ)
2n² + 5n – 6 = O(n³)        2n² + 5n – 6 ≠ Θ(n³)
2n² + 5n – 6 = O(n²)        2n² + 5n – 6 = Θ(n²)
2n² + 5n – 6 ≠ O(n)         2n² + 5n – 6 ≠ Θ(n)

2n² + 5n – 6 ≠ Ω(2ⁿ)        2n² + 5n – 6 = o(2ⁿ)
2n² + 5n – 6 ≠ Ω(n³)        2n² + 5n – 6 = o(n³)
2n² + 5n – 6 = Ω(n²)        2n² + 5n – 6 ≠ o(n²)
2n² + 5n – 6 = Ω(n)         2n² + 5n – 6 ≠ o(n)
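
As a worked check of the Θ(n²) line above, explicit constants can be exhibited on both sides (written in LaTeX notation; the particular constants are just one valid choice):

    2n^2 + 5n - 6 \le 3n^2 \quad\text{for all } n \ge 3,
        \text{ since } n^2 - 5n + 6 = (n-2)(n-3) \ge 0,
    2n^2 + 5n - 6 \ge n^2  \quad\text{for all } n \ge 1,
        \text{ since } n^2 + 5n - 6 = (n+6)(n-1) \ge 0.

Hence, with c_1 = 1, c_2 = 3 and n_0 = 3, we have c_1 n^2 \le 2n^2 + 5n - 6 \le c_2 n^2 for all n \ge n_0, which is exactly the statement 2n² + 5n – 6 = Θ(n²).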
Example 3:

If the first program takes 100n² milliseconds while the second takes 5n³ milliseconds, might not the 5n³ program be better than the 100n² program?

The programs can be evaluated by comparing their running-time functions, with constants of proportionality neglected. Here the 5n³ program can indeed be better than the 100n² program for small inputs, since

5n³ / 100n² = n/20

so for inputs n < 20, the program with running time 5n³ will be faster than the one with running time 100n². Therefore, if the program is to be run mainly on inputs of small size, we would indeed prefer the program whose running time is O(n³).

However, as ‘n’ gets large, the ratio of the running times, which is n/20, gets arbitrarily large. Thus, as the size of the input increases, the O(n³) program will take significantly more time than the O(n²) program. So it is always better to prefer a program whose running time has the lower growth rate. Low-growth-rate functions such as O(n) or O(n log n) are always better.

Example 4:

Analysis of simple for loop

Now let’s consider a simple for loop:

for (i = 1; i <= n; i++)
    v[i] = v[i] + 1;

This loop will run exactly n times, and because the inside of the loop takes constant
time, the total running time is proportional to n. We write it as O(n). The actual
number of instructions might be 50n, while the running time might be 17n
microseconds. It might even be 17n+3 microseconds because the loop needs some
time to start up. The big-O notation allows a multiplication factor (like 17) as well as
an additive factor (like 3). As long as it’s a linear function which is proportional to n,
the correct notation is O(n) and the code is said to have linear running time.

Example 5:

Analysis of nested for loop

Now let’s look at a more complicated example, a nested for loop:

for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++)
        a[i][j] = b[i][j] * x;

The outer for loop executes n times, while the inner loop executes n times for every execution of the outer loop. That is, the inner loop body executes n × n = n² times. The assignment statement in the inner loop takes constant time, so the running time of the code is O(n²) steps. This piece of code is said to have quadratic running time.
Example 6:

Analysis of matrix multiply

Let’s start with an easy case: multiplying two n × n matrices. The code to compute the matrix product C = A * B is given below.

for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j++) {
        C[i][j] = 0;
        for (k = 1; k <= n; k++)
            C[i][j] = C[i][j] + A[i][k] * B[k][j];
    }

There are 3 nested for loops, each of which runs n times. The innermost loop therefore executes n·n·n = n³ times. The innermost statement, which contains a scalar sum and product, takes constant O(1) time. So the algorithm overall takes O(n³) time.

Example 7:

Analysis of bubble sort

The main body of the code for bubble sort looks something like this:

for (i = n - 1; i >= 1; i--)
    for (j = 1; j <= i; j++)
        if (a[j] > a[j+1]) {
            /* swap a[j] and a[j+1] */
            temp = a[j];
            a[j] = a[j+1];
            a[j+1] = temp;
        }

This looks like the doubly nested loop of the previous examples. The innermost statement, the if, takes O(1) time. It doesn’t necessarily take the same time when the condition is true as when it is false, but both times are bounded by a constant. But there is an important difference here. The outer loop executes n-1 times, and the inner loop executes a number of times that depends on i. The first time the inner for loop executes, it runs i = n-1 times. The second time it runs n-2 times, etc. The total number of times the inner if statement executes is therefore:

(n-1) + (n-2) + ... + 3 + 2 + 1

This is the sum of an arithmetic series:

    1 + 2 + ... + (n−1) = n(n−1)/2 = n²/2 − n/2

The value of the sum is n(n-1)/2. So the running time of bubble sort is O(n(n-1)/2), which is O((n²-n)/2). Using the rules for big-O given earlier, this bound simplifies to O(n²/2) by ignoring a smaller term, and to O(n²) by ignoring a constant factor. Thus, bubble sort is an O(n²) algorithm.
Example 8:

Analysis of binary search

Binary search is a little harder to analyze because it doesn’t have a for loop. But it’s
still pretty easy because the search interval halves each time we iterate the search.
The sequence of search intervals looks something like this:

n, n/2, n/4, ..., 8, 4, 2, 1

It’s not obvious how long this sequence is, but if we take logs, it is:

log₂ n, log₂ n - 1, log₂ n - 2, ..., 3, 2, 1, 0

Since the second sequence decrements by 1 each time down to 0, its length must be log₂ n + 1. It takes only constant time to do each test of binary search, so the total running time is just the number of times that we iterate, which is log₂ n + 1. So binary search is an O(log₂ n) algorithm. Since the base of the log doesn’t matter in an asymptotic bound, we can write that binary search is O(log n).
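
For reference, here is a minimal iterative C version of the binary search analyzed above (the array and keys in main are illustrative only).

#include <stdio.h>

/* Returns the index of key in a[0..n-1] (sorted in ascending order),
   or -1 if key is absent. Each iteration halves the search interval,
   so the loop runs at most log2(n) + 1 times: O(log n). */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* avoids overflow of (low + high) / 2 */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;                  /* discard the left half  */
        else
            high = mid - 1;                 /* discard the right half */
    }
    return -1;
}

int main(void)
{
    int a[] = { 2, 5, 8, 12, 16, 23, 38, 56, 72, 91 };
    int n = (int)(sizeof a / sizeof a[0]);
    printf("index of 23 = %d\n", binary_search(a, n, 23));   /* prints 5  */
    printf("index of 7  = %d\n", binary_search(a, n, 7));    /* prints -1 */
    return 0;
}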

General rules for the analysis of programs:

In general, the running time of a statement or group of statements may be parameterized by the input size and/or by one or more variables. The only permissible parameter for the running time of the whole program is ‘n’, the input size.

1. The running time of each assignment, read, and write statement can usually be taken to be O(1). (There are a few exceptions, such as in PL/1, where assignments can involve arbitrarily large arrays, and in any language that allows function calls in assignment statements.)

2. The running time of a sequence of statements is determined by the sum rule, i.e. the running time of the sequence is, to within a constant factor, the largest running time of any statement in the sequence.

3. The running time of an if–statement is the cost of the conditionally executed statements, plus the time for evaluating the condition. The time to evaluate the condition is normally O(1). The time for an if–then–else construct is the time to evaluate the condition plus the larger of the time needed for the statements executed when the condition is true and the time for the statements executed when the condition is false.

4. The time to execute a loop is the sum, over all times around the loop, of the time to execute the body and the time to evaluate the condition for termination (usually the latter is O(1)). Often this time is, neglecting constant factors, the product of the number of times around the loop and the largest possible time for one execution of the body, but we must consider each loop separately to make sure.
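
The four rules can be applied together to a short C fragment; the fragment itself is an arbitrary sketch, and the analysis is carried in the comments.

#include <stdio.h>

void analyze_me(int n, int a[])
{
    int i, count = 0;                /* rule 1: an assignment is O(1)                */

    for (i = 0; i < n; i++) {        /* rule 4: n iterations ...                     */
        if (a[i] % 2 == 0)           /* rule 3: O(1) condition plus an O(1) branch,  */
            count++;                 /*         so the whole if costs O(1)           */
        else
            count--;
    }                                /* ... times an O(1) body  =>  O(n)             */

    printf("count = %d\n", count);   /* rule 1: O(1)                                 */
}                                    /* rule 2 (sum rule): O(1) + O(n) + O(1) = O(n) */

int main(void)
{
    int a[] = { 1, 2, 3, 4, 5, 6 };
    analyze_me(6, a);                /* prints count = 0 */
    return 0;
}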
