Chapter 1-3

The document introduces data structures and algorithms, explaining that a program consists of organized data and a sequence of steps to solve a problem. It categorizes data structures into primitive and non-primitive types, further dividing non-primitive structures into linear and non-linear categories, such as arrays, stacks, queues, trees, and graphs. Additionally, it discusses algorithms, their efficiency, and the importance of analyzing their time and space complexity.

CHAPTER ONE

INTRODUCTION TO DATA
STRUCTURES AND
ALGORITHMS
INTRODUCTION
 A program is code written in order to solve a problem.
 A solution to a problem actually consists of two things:
  A way to organize the data
  A sequence of steps to solve the problem
 The way data are organized in a computer’s memory is called a Data Structure, and
 the sequence of computational steps to solve a problem is called an Algorithm.
 Therefore, a program is nothing but data structures plus algorithms.
What are Data Structures?
 The name "data structure" indicates organizing the data in memory.
 It is a way of arranging data on a computer so that it can be accessed and updated efficiently.
 For example, if you want to store data sequentially in memory, you can use the Array data structure.

Array Data Structure Representation

 A data structure is not a programming language like C, C++, Java, etc.
 It is a set of techniques that we can use in any programming language to structure the data in memory.
CON’T…
 A data structure is a particular way of storing and organizing data in the memory of the computer so that the data can easily be retrieved and efficiently utilized.
 Note: Data structures and data types are slightly different.
 A data structure is a collection of data types arranged in a specific order.
 We can classify Data Structures into two categories:
  Primitive Data Structure
   can be manipulated or operated on directly by machine-level instructions.
  Non-Primitive Data Structure
   derived from Primitive Data Structures and cannot be manipulated directly by machine-level instructions.
Types of Non-Primitive Data Structure
 Basically, data structures are divided into two categories:
 Linear data structure
 Non-linear data structure
Linear data structures
 In linear data structures, the elements are arranged in sequence.
 Since elements are arranged in a particular order, they are easy to implement.
 However, when the complexity of the program increases, linear data structures might not be the best choice because of operational complexities.
Popular linear data structures are:
i. Array Data Structure
 In an array, elements are arranged in contiguous memory locations.
 All the elements of an array are of the same type.
 The type of elements that can be stored in an array is determined by the programming language.

An array with each element represented by an index


ii. Stack Data Structure
 The stack data structure works on the LIFO (Last In, First Out) principle. That is, the last element stored in a stack will be removed first.
 It works just like a pile of plates, where the last plate kept on the pile will be removed first.
 In a stack, operations can be performed only from one end (the top).
iii. Queue Data Structure
 Unlike a stack, the queue data structure works on the FIFO (First In, First Out) principle, where the first element stored in the queue will be removed first.
 It works just like a queue of people at a ticket counter, where the first person in the queue will get the ticket first.
 In a queue, addition and removal are performed from separate ends.

iv. Linked List Data Structure


 In the linked list data structure, data elements are connected through a series of nodes, and each node contains a data item and the address of the next node.

Linked list
Nonlinear data structures
 Unlike linear data structures, elements in non-linear data structures are not arranged in any sequence.
 Instead, they are arranged in a hierarchical manner where one element is connected to one or more elements.
 Non-linear data structures are further divided into graph-based and tree-based data structures.
1. Tree Data Structure
 A tree is a collection of vertices and edges. However, in a tree data structure, there can only be one edge between two vertices.
 It contains a central node, structural nodes, and sub-nodes connected via edges.
 It also consists of roots, branches, and leaves connected to one another.
 Popular Tree based Data Structure
 Binary Tree
 Binary Search Tree
 AVL Tree
 B-Tree
 B+ Tree
 Red-Black Tree

Tree data structure example
2. GRAPH DATA STRUCTURE
 A graph is utilized to address real-world problems in which the problem area is represented as a network, such as social networks, circuit networks, and telephone networks.
 In a graph data structure, each node is called a vertex, and each vertex is connected to other vertices through edges.
 Based on the position of the vertices and edges, graphs can be:
  Null Graph
  Trivial Graph
  Directed Graph
  Complete Graph
Linear Vs Non-linear Data Structures
 Now that we know about linear and non-linear data structures, let's see the major
differences between them.
Linear Data Structures:
 The data items are arranged in sequential order, one after the other.
 All the items are present on a single layer.
 It can be traversed in a single run. That is, if we start from the first element, we can traverse all the elements sequentially in a single pass.
 The memory utilization is not efficient.
 Time complexity remains the same.
 Example: Arrays, Stack, Queue

Non-Linear Data Structures:
 The data items are arranged in non-sequential order (hierarchical manner).
 The data items are present at different layers.
 It requires multiple runs. That is, if we start from the first element, it might not be possible to traverse all the elements in a single pass.
 Different structures utilize memory in different efficient ways depending on the need.
 The time complexity increases with the data size.
 Example: Tree, Graph, Map


ALGORITHM
 An algorithm is a well-defined computational procedure that
takes some value or a set of values as input and produces some
value or a set of values as output.
 It is a process or a set of rules required to perform calculations or other problem-solving operations, especially by a computer.
 Algorithms are the dynamic part of a program’s world model.
 An algorithm transforms data structures from one state to another state in two ways:
  An algorithm may change the value held by a data structure
  An algorithm may change the data structure itself
 An algorithm contains a finite set of instructions which are carried out in a specific order to perform a specific task.
GOOD ALGORITHMS?
 Run in less time
 Consume less memory
But computational time (time complexity) is usually the more important of the two.
Measuring Algorithms Efficiency
 The efficiency of an algorithm is a measure of the amount of
resources consumed in solving a problem of size n.
 The resource we are most interested in is time
 We can use the same techniques to analyze the consumption of other
resources, such as memory space.
 It would seem that the most obvious way to measure the efficiency of an algorithm is to run it and measure how much processor time is needed.
FACTORS
 Hardware
 Operating System
 Compiler
 Size of input
 Nature of Input
 Algorithm
Which should be improved?
RUNNING TIME OF AN ALGORITHM
 Depends upon
 Input Size
 Nature of Input
 Generally time grows with the size of the input, so the running time of an algorithm is usually measured as a function of input size.
 Running time is measured in terms of the number of steps/primitive operations performed
 It is independent of the machine and OS
Measuring complexity
 Algorithm analysis refers to the process of determining how much computing time and storage an algorithm will require.
 In other words, it’s a process of predicting the resource requirement of algorithms in a given environment.
 In order to solve a problem, there are many possible algorithms.
One has to be able to choose the best algorithm for the problem at
hand using some scientific method.
 To classify some data structures and algorithms as good, we need
precise ways of analyzing them in terms of resource requirement.
 The main resources are:
 Running Time
 Memory Usage

 Running time is usually treated as the most important, since computational time is the most precious resource in most problem domains.
 There are two approaches to measuring the efficiency of algorithms:
  Empirical
  Theoretical
1. Empirical Algorithm Analysis
 It works based on the total running time of the program. It
uses actual system clock time.
Example:
t1 (initial time before the program starts)
for(int i=0; i<=10; i++)
  cout<<i;
t2 (final time after the execution of the program is finished)
Running time taken by the above algorithm (TotalTime) = t2 - t1
 It is difficult to determine the efficiency of algorithms using this approach, because clock time can vary based on many factors. For example:
a) Processor speed of the computer
b) Current processor load
c) Specific data for a particular run of the program
d) Operating System
 Multitasking Vs Single tasking
 Internal structure
2. Theoretical Algorithm Analysis
 Determining the quantity of resources required using mathematical concepts.
 Analyze an algorithm according to the number of basic operations (time units) required, rather than according to an absolute amount of time involved.
 We use the theoretical approach to determine the efficiency of an algorithm because:
  The number of operations will not vary under different conditions.
  It helps us to have a meaningful measure that permits comparison of algorithms independent of the operating platform.
  It helps to determine the complexity of the algorithm.
Complexity Analysis
 Complexity Analysis is the systematic study of the cost of
computation, measured either in time units or in
operations performed, or in the amount of storage space
required.
 Two important ways to characterize the effectiveness of an
algorithm are its Space Complexity and Time Complexity.
 Time Complexity: Determine the approximate number of
operations required to solve a problem of size n. The limiting
behavior of time complexity as size increases is called
the Asymptotic Time Complexity.
 Space Complexity: Determine the approximate memory required to solve a problem of size n. The limiting behavior of space complexity as size increases is called the Asymptotic Space Complexity.
o The Asymptotic Complexity of an algorithm determines the size of problems that can be solved by the algorithm.
CON’T…
 The time required by an algorithm falls under three cases:
  Worst case: It defines the input for which the algorithm takes the longest time.
  Average case: It is the average time taken for program execution over all inputs.
  Best case: It defines the input for which the algorithm takes the least time.
o There is no generally accepted set of rules for algorithm
analysis. However, an exact count of operations is
commonly used.
Analysis Rules:
1. We assume an arbitrary time unit.
2. Execution of one of the following operations takes time 1:
 Assignment Operation
 Single Input/Output Operation
 Single Boolean Operations
 Single Arithmetic Operations
 Function Return
3. The running time of a selection statement (if, switch) is the time for the condition evaluation + the maximum of the running times for the individual clauses in the selection.
Example:
int x;
int sum=0;
if(a>b)
{
  sum= a+b;
  cout<<sum;
}
else
{
  cout<<b;
}

T(n) = 1 + 1 + max(3,1) = 5
4. Loops: Running time for a loop is equal to the
running time for the statements inside the loop *
number of iterations.
o The total running time of a statement inside a group of
nested loops is the running time of the statements
multiplied by the product of the sizes of all the loops.
 For nested loops, analyze inside out.
 Always assume that the loop executes the
maximum number of iterations possible.
5. The running time of a function call is 1 for setup + the time for any parameter calculations + the time required for the execution of the function body.
Examples:
1. int count(){
     int k=0;
     cout<< “Enter an integer”;
     cin>>n;
     for (i=0;i<n;i++)
       k=k+1;
     return 0;
   }

Time Units to Compute
-------------------------------------------------
1 for the assignment statement: int k=0
1 for the output statement.
1 for the input statement.
In the for loop:
  1 assignment, n+1 tests, and n increments.
  n loops of 2 units for an assignment and an addition.
1 for the return statement.
-------------------------------------------------
T(n) = 1+1+1+(1+n+1+n)+2n+1 = 4n+6 = O(n)
2. int total(int n)
   {
     int sum=0;
     for (int i=1;i<=n;i++)
       sum=sum+1;
     return sum;
   }

Time Units to Compute
-------------------------------------------------
1 for the assignment statement: int sum=0
In the for loop:
  1 assignment, n+1 tests, and n increments.
  n loops of 2 units for an assignment and an addition.
1 for the return statement.
-------------------------------------------------
T(n) = 1+(1+n+1+n)+2n+1 = 4n+4 = O(n)
3. void func()
   {
     int x=0;
     int i=0;
     int j=1;
     cout<< “Enter an Integer value”;
     cin>>n;
     while (i<n){
       x++;
       i++;
     }
     while (j<n)
     {
       j++;
     }
   }

Time Units to Compute
---------------------------------------------
1 for the first assignment statement: x=0;
1 for the second assignment statement: i=0;
1 for the third assignment statement: j=1;
1 for the output statement.
1 for the input statement.
In the first while loop:
  n+1 tests
  n loops of 2 units for the two increment (addition) operations
In the second while loop:
  n tests
  n-1 increments
---------------------------------------------
T(n) = 1+1+1+1+1+n+1+2n+n+n-1 = 5n+5 = O(n)
4. int sum (int n)
   {
     int partial_sum = 0;
     for (int i = 1; i <= n; i++)
       partial_sum = partial_sum + (i * i * i);
     return partial_sum;
   }

Time Units to Compute
-------------------------------------------------
1 for the assignment.
In the for loop:
  1 assignment, n+1 tests, and n increments.
  n loops of 4 units for an assignment, an addition, and two multiplications.
1 for the return statement.
-------------------------------------------------
T(n) = 1+(1+n+1+n)+4n+1 = 6n+4 = O(n)


Formal Approach to Analysis
 In the above examples we have seen that analyzing loop statements is complex. However, it can be simplified by using a formal approach, in which case we can ignore initializations, loop control, and updates.
For Loops: Formally
 In general, a for loop translates to a summation. The index and bounds of the summation are the same as the index and bounds of the for loop.
 Suppose we count the number of additions that are done. There is 1 addition per iteration of the loop, hence N additions in total.


Consecutive Statements: Formally
 Add the running times of the separate blocks of your code.

Nested Loops: Formally
 Nested for loops translate into multiple summations, one for each for loop.
Conditionals: Formally
 If (test) s1 else s2: Compute the time for the condition evaluation + the maximum of the running times for s1 and s2.
ASYMPTOTIC NOTATIONS
 The commonly used asymptotic notations for describing the running-time complexity of an algorithm are given below:
 Big Oh Notation (O)
  It measures the worst case of time complexity.
 Omega Notation (Ω)
  It basically describes the best-case scenario.
  It determines the fastest time of an algorithm.
 Theta Notation (θ)
  It describes the average, realistic case.
  It is used when the worst-case and best-case values are the same.
 Little o Notation (o)
  It describes an upper bound that isn't asymptotically tight.
Big O, Little O, Omega & Theta
Big O: “f(n) is O(g(n))” iff for some constants c and n₀, f(n) ≤ c·g(n) for all n > n₀
Omega: “f(n) is Ω(g(n))” iff for some constants c and n₀, f(n) ≥ c·g(n) for all n > n₀
Theta: “f(n) is Θ(g(n))” iff f(n) is O(g(n)) and f(n) is Ω(g(n))
Little o: “f(n) is o(g(n))” iff f(n) is O(g(n)) and f(n) is not Θ(g(n))

In plain words:
 Big O (O()) describes the upper bound of the complexity.
 Omega (Ω()) describes the lower bound of the complexity.
 Theta (Θ()) describes the exact bound of the complexity.
 Little o (o()) describes the upper bound excluding the exact bound.
Relationships between Big O, Little O, Omega & Theta Illustrated

CHAPTER TWO
SIMPLE SEARCHING
AND
SORTING ALGORITHMS
Why do we study sorting and searching algorithms?
 Searching and sorting are among the most common and useful tasks performed by computer systems.
 Computers spend a lot of time searching and sorting.
2.1. Simple Searching Algorithms
 Searching is the process of finding an element in a list of items, or determining that the item is not in the list.
 A search method looks for a key, which arrives as a parameter. By convention, the method returns the index of the element corresponding to the key or, if unsuccessful, the value -1.
 There are two simple searching algorithms:
  Sequential (Linear) Search, and Binary Search
2.1.1. Sequential Searching (Linear)
 Sequential search looks at elements, one at a time, from the first in
the list until a match for the target is found.
 The most natural way of searching an item. Easy to understand and
implement.
Algorithm:
 In a linear search, we start at the top (beginning) of the list and compare the element there with the key.
 If we have a match, the search terminates and the index number is returned. If not, we go on to the next element in the list.
 If we reach the end of the list without finding a match, we return -1.
Pseudo code
 Loop through the array starting at the first element until the value of the target matches one of the array elements.
 If a match is not found, return -1.
 Time is proportional to the size of the input (n); we call this time complexity O(n).
2.1.2. Binary Searching
 This searching algorithm works only for an ordered list of
elements and also uses divide and conquer strategy
(approach).
Algorithm:
I. In a binary search, we look for the key in the middle of the list. If we get a match, the search is over.
II. If the key is greater than the element in the middle of the list, we search the top (upper) half of the list.
III. If the key is smaller, we search the bottom (lower) half of the list.
IV. Repeat the above steps (I, II and III) until one element remains.
 If this element matches, return the index of the element; else return -1 (-1 shows that the key is not in the list).
 The computational time for this algorithm is proportional to log2 n; therefore the time complexity is O(log n).


SPACE COMPLEXITY: O(1)
2.2. Sorting Algorithms
 Sorting is one of the most important operations
performed by computers.
 Sorting is a process of reordering a list of items in either
increasing or decreasing order.
 Sorting data (i.e., placing the data into some particular
order, such as ascending or descending) is one of the
most important computing applications.
 It is important mainly due to the fact that searching a list is much faster if the list is sorted.
 The following are simple sorting algorithms used to sort small-sized lists:
  Insertion Sort
  Selection Sort
  Bubble Sort
2.2.1. Insertion Sort
 The insertion sort works just like its name suggests - it
inserts each item into its proper place in the final list.
 The simplest implementation of this requires two list
structures - the source list and the list into which sorted
items are inserted.
 To save memory, most implementations use an in-place
sort that works by moving the current item past the
already sorted items and repeatedly swapping it with
the preceding item until it is in place.
 It is the most instinctive type of sorting algorithm.
 The approach is the same approach that you use for sorting a set of cards in your hand.
 While playing cards, you pick up a card, start at the beginning of your hand and find the place to insert the new card, insert it, and move all the others up one place.
Basic Idea:
 Find the location for an element and move all others up, and
insert the element. The process involved in insertion sort is
as follows:
1. The left most value can be said to be sorted relative to
itself. Thus, we don’t need to do anything.
2. Check to see if the second value is smaller than the first
one. If it is, swap these two values. The first two values
are now relatively sorted.
3. Next, we need to insert the third value in to the
relatively sorted portion so that after insertion, the
portion will still be relatively sorted.
4. Remove the third value first. Slide the second value to
make room for insertion. Insert the value in the
appropriate position.
5. Now the first three are relatively sorted.
6. Do the same for the remaining items in the list.
2.2.2. Selection Sort
Basic Idea:
Loop through the array from i=0 to n-1.
Select the smallest element in the array from i to n-1.
Swap this value with the value at position i.
2.2.3. Bubble Sort
Bubble sort is the simplest algorithm to implement and the slowest
algorithm on very large inputs.
Basic Idea: Loop through the array from i=0 to n-1 and swap adjacent elements if they are out of order.
CHAPTER THREE
LINKED LISTS
3.1. Definition
 A linked list is a linear data structure, in which the elements
are not stored at contiguous memory locations.
 The elements in a linked list are linked using pointers as
shown in the below image:

 In simple words, a linked list consists of nodes where each


node contains a data field and a reference(link) to the next
node in the list.
 Linked lists are the most basic self-referential structures.
 A linked list is made up of a chain of nodes. Each node contains:
  the data item, and
  a pointer to the next node
3.2. Array Vs Linked lists
 Arrays are simple and fast, but we must specify their size at construction time.
 This has its own drawbacks:
  If you construct an array with space for n, tomorrow you may need n+1.
  Hence the need for a more flexible system.
 Like arrays, a linked list is a linear data structure.
 Unlike arrays, linked list elements are not stored at contiguous locations; the elements are linked using pointers.
Why Linked List?
 Arrays can be used to store linear data of similar types, but arrays have the following limitations:
1. The size of the arrays is fixed: So we must know the
upper limit on the number of elements in advance. Also,
generally, the allocated memory is equal to the upper
limit irrespective of the usage.
2. Inserting a new element in an array of elements is
expensive because the room has to be created for the
new elements and to create room existing elements
have to be shifted.
 For example, in a system, if we maintain a sorted list of IDs in
an array id[].
id[] = [1000, 1010, 1050, 2000, 2040].
 And if we want to insert a new ID 1005, then to maintain the sorted order, we have to move all the elements after 1000 (excluding 1000). Deletion is also expensive with arrays unless special techniques are used. For example, to delete 1010 in id[], everything after 1010 has to be moved.
Advantages of Linked Lists
 Flexible space use by dynamically allocating space for each
element as needed. This implies that one need not know the
size of the list in advance. Memory is efficiently utilized. i.e.
 They are dynamic – so length can increase or decrease as
necessary.
 Each node does not necessarily follow the previous one in
memory.
 Insertion and deletion are cheap
  Only a few nodes (at most) need to change
Is there a negative aspect of linked lists?
 We do not know the address of any individual node
 So we have to traverse the list to find it, which may take a large number of operations.
Drawbacks:
1. Random access is not allowed. We have to access elements
sequentially starting from the first node. So we cannot do
binary search with linked lists efficiently with its default
implementation.
2. Extra memory space for a pointer is required with each
element of the list.
3. Not cache friendly. Since array elements are contiguous
locations, there is locality of reference which is not there in
case of linked lists.
Representation:
 A linked list is represented by a pointer to the first node of the
linked list.
 The first node is called the head. If the linked list is empty,
then the value of the head is NULL.
 There are four types of linked lists: singly linked list, doubly linked list, circular linked list, and doubly circular linked list.
  The singly linked list contains nodes that only point to the next node.
  The doubly linked list has nodes that can point to both the next and the previous node.
  The circular linked list is one whose last node points back to the first node.
  The doubly circular linked list combines both: each node points to the next and previous nodes, and the list wraps around.
3.3. Singly Linked Lists
 A node has two parts:
 the data part and
 the next part.
 The data part contains the stored data, and
 the next part provides the address of the next node.
 The first node of a linked list is called the head, and the
last node is called the tail.
 The list starts traversing from the head, while the tail
ends the list by pointing at NULL.

1. Creating Linked Lists in C++
 A linked list is a data structure that is built from
structures and pointers.
 It forms a chain of "nodes" with pointers representing
the links of the chain and holding the entire thing
together.
 A linked list can be represented by a diagram like this
one:

This linked list has four nodes in it, each with a link to the next node in the series. The last node has a link to the special value NULL, which any pointer (whatever its type) can point to, to show that it is the last link in the chain. There is also another special pointer, called Start (also called head), which points to the first link in the chain so that we can keep track of it.
Linked list example

#include<iostream>
using namespace std;

struct node
{
  int data;
  node *next;
};

node *head=NULL;

int add_node(int n);
int display();

int main()
{
  display();
  add_node(1);
  display();
  add_node(2);
  display();
  add_node(7);
  display();
}

int add_node(int n)
{
  node *temp=new node;
  node *temp2;
  temp->data=n;
  temp->next=NULL;
  if(head==NULL)
  {
    head=temp;
    head->next=NULL;
  }
  else
  {
    temp2=head;
    while(temp2->next!=NULL)
      temp2=temp2->next;
    temp2->next=temp;
  }
  return 0;
}

int display()
{
  node *temp;
  temp=head;
  while(temp!=NULL)
  {
    cout<<temp->data<<endl;
    temp=temp->next;
  }
  return 0;
}
ADDING A NODE TO THE FRONT
int insert_front(int x)
{
  node *temp=new node;
  temp->data=x;
  temp->next=NULL;
  if(head==NULL)
    head=temp;
  else{
    temp->next=head;
    head=temp;
  }
  return 0;
}
DELETING FROM THE FRONT
int delet_front()
{
  node *temp;
  if(head==NULL)
    cout<<"No data inside\n";
  else{
    temp=head;
    head=head->next;
    delete temp;
  }
  return 0;
}
DELETING ANY SPECIFIC NODE
int delet_any(int x)
{
  node *temp, *temp3;
  if(head==NULL)
    cout<<"No data inside\n";
  else if(head->data==x)
  {
    temp=head;
    head=head->next;
    delete temp;
  }
  else
  {
    temp=head;
    while(temp!=NULL && temp->data!=x)
    {
      temp3=temp;
      temp=temp->next;
    }
    if(temp==NULL)
      cout<<"Value not found\n"; // x is not in the list
    else
    {
      temp3->next=temp->next;
      delete temp;
    }
  }
  return 0;
}
DELETING FROM THE END
int delet_end()
{
  node *temp, *temp3;
  if(head==NULL)
    cout<<"No data inside\n";
  else if(head->next==NULL) // only one node in the list
  {
    delete head;
    head=NULL;
  }
  else
  {
    temp=head;
    while(temp->next!=NULL)
    {
      temp3=temp;
      temp=temp->next;
    }
    temp3->next=NULL;
    delete temp;
  }
  return 0;
}
3.4. Doubly Linked Lists
 A doubly linked list is one where there are links from each
node in both directions:

 Each node in the list has two pointers, one to the next node
and one to the previous one - again, the ends of the list are
defined by NULL pointers.
 There is no pointer to the start of the list. Instead, there is simply a pointer to some position in the list that can be moved left or right.
 The reason we needed a start pointer in the ordinary linked list is
because, having moved on from one node to another, we can't
easily move back, so without the start pointer, we would lose
track of all the nodes in the list that we have already passed.
 With the doubly linked list, we can move the current pointer
backwards and forwards at will.
3.4.1. Creating Doubly Linked Lists
 The nodes for a doubly linked list would be defined as follows:

struct node
{ char name[20];
  node *nxt; // Pointer to next node
  node *prv; // Pointer to previous node
};
node *current;
current = new node;
strcpy(current->name, "Fred"); // name is a char array, so copy with strcpy (from <cstring>)
current->nxt = NULL;
current->prv = NULL;
 We have also included some code to declare the first
node and set its pointers to NULL. It gives the following
situation:

 We still need to consider the directions 'forward' and 'backward', so in this case, we will need to define functions to add a node to the start of the list (left-most position) and the end of the list (right-most position).
3.3.2. Adding a Node to a Doubly Linked List
void add_node_at_start (string new_name)
{ // Declare a temporary pointer and move it to the start
  // (assumes the list already has at least one node)
  node *temp = current;
  while (temp->prv != NULL)
    temp = temp->prv;
  // Declare a new node and link it in
  node *temp2;
  temp2 = new node;
  strcpy(temp2->name, new_name.c_str()); // Store the new name in the node
  temp2->prv = NULL; // This is the new start of the list
  temp2->nxt = temp; // Links to current list
  temp->prv = temp2;
  current=temp2;
}
void add_node_at_end (string new_name)
{ // Declare a temporary pointer and move it to the end
  // (assumes the list already has at least one node)
  node *temp = current;
  while (temp->nxt != NULL)
    temp = temp->nxt;
  // Declare a new node and link it in
  node *temp2;
  temp2 = new node;
  strcpy(temp2->name, new_name.c_str()); // Store the new name in the node
  temp2->nxt = NULL; // This is the new end of the list
  temp2->prv = temp; // Links to current list
  temp->nxt = temp2;
}
 Here, the new name is passed to the appropriate function as a parameter.
 We'll go through the function for adding a node to the
right-most end of the list.
 The method is similar for adding a node at the other end.
 Firstly, a temporary pointer is set up and made to march along the list until it points to the last node in the list.
 After that, a new node is declared, and the name is copied into it. The nxt pointer of this new node is set to NULL to indicate that this node will be the new end of the list.
 The prv pointer of the new node is linked into the last node of the existing list.
 The nxt pointer of the current end of the list is set to the new node.
Deleting a node from the end of a doubly
linked list
void delete_end()
{
  node *temp;
  if(tail==NULL)
    cout <<"No data inside\n";
  else
  {
    temp = tail;
    tail = tail->prev;
    if(tail != NULL)   // guard: the list may now be empty
      tail->next = NULL;
    delete temp;
  }
}
Deleting a node from the front of a doubly
linked list
void delete_front()
{
  node *temp;
  if(head==NULL)
    cout <<"No data inside\n";
  else
  {
    temp = head;
    head = head->next;
    if(head != NULL)   // guard: the list may now be empty
      head->prev = NULL;
    delete temp;
  }
}
Examples of Doubly Linked List

#include<iostream>
using namespace std;

struct Dlink
{
  int data;
  Dlink *next;
  Dlink *prev;
};

Dlink *current = NULL, *t;

void creatDlink(int d)
{
  Dlink *temp;
  temp = new Dlink;
  temp->data = d;
  temp->next = NULL;
  temp->prev = NULL;
  if(current == NULL)
  {
    current = temp;
    t = current;
  }
  else
  {
    t->next = temp;
    temp->prev = t;
    t = temp;
  }
}

void DinsertEnd(int item)
{
  Dlink *temp1, *temp2;
  temp1 = current;
  temp2 = new Dlink;
  temp2->data = item;
  temp2->next = NULL;
  temp2->prev = NULL;
  if(current == NULL)      // guard: empty list
  {
    current = temp2;
    return;
  }
  while(temp1->next != NULL)
    temp1 = temp1->next;
  temp1->next = temp2;
  temp2->prev = temp1;
}

void DinsertFront(int item)
{
  Dlink *temp;
  temp = new Dlink;
  temp->data = item;
  temp->prev = NULL;
  temp->next = current;
  if(current != NULL)      // guard: empty list
    current->prev = temp;
  current = temp;
}

void DDeleteFront()
{
  Dlink *temp;
  if(current == NULL)
    cout<<"Empty list.\n";
  else
  {
    temp = current;
    current = current->next;
    if(current != NULL)    // guard: the list may now be empty
      current->prev = NULL;
    delete temp;
  }
}

void DDeleteEnd()
{
  Dlink *temp1, *temp2;
  if(current == NULL)
    cout<<"Empty list.\n";
  else if(current->next == NULL)  // only one node in the list
  {
    delete current;
    current = NULL;
  }
  else
  {
    temp1 = current;
    while(temp1->next != NULL)
    {
      temp2 = temp1;
      temp1 = temp1->next;
    }
    temp2->next = NULL;
    delete temp1;
  }
}

void Ddisplay()
{
  Dlink *t;
  if(current == NULL)
    cout<<"Empty list.\n";
  else
  {
    t = current;
    while(t != NULL)
    {
      cout<<t->data<<" ";
      t = t->next;
    }
    cout<<endl;
  }
}

int main()
{
  int m, value;
  cout<<"How many elements in the doubly linked list? ";
  cin>>m;
  for(int i=1; i<=m; i++)
  {
    cout<<"\tEnter element "<<i<<" :";
    cin>>value;
    creatDlink(value);
  }
  cout<<"\nThe doubly linked list elements are: \n\t";
  Ddisplay();
  cout<<"\nWhen FRONT element is DELETED, it becomes:\n\t";
  DDeleteFront();
  Ddisplay();
  cout<<"\nEnter an element to INSERT at the END: ";
  cin>>value;
  cout<<"\nAfter inserting "<<value<<" at the end, it becomes: \n\t";
  DinsertEnd(value);
  Ddisplay();
  cout<<"\nEnter an element to INSERT at FRONT: ";
  cin>>value;
  cout<<"\nAfter inserting "<<value<<" at front, it becomes: \n\t";
  DinsertFront(value);
  Ddisplay();
  cout<<"\nAfter the END element is DELETED, it becomes: \n\t";
  DDeleteEnd();
  Ddisplay();
}