
Unit 4: Linked Lists

Syllabus Contents: Introduction to Linked List Terminologies, operations on


linked list, Types of Linked List: Linear list, doubly and circular linked lists.
Operations on Linked List, Linked representation of stack and Queue, Dynamic
storage management, Memory efficient doubly linked list, unrolled Linked List,
Skip List

Introduction to Linked List Terminologies


• Linked List is a linear data structure in which elements are not stored at
contiguous memory locations; instead, they are linked using pointers.
• Linked List forms a series of connected nodes, where each node stores
the data and the address of the next node.

Node Structure: A node in a linked list typically consists of two components:


1. Data: It holds the actual value or data associated with the node.
2. Next Pointer or Reference : It stores the memory address (reference)
of the next node in the sequence.
Head and Tail: The linked list is accessed through the head node, which
points to the first node in the list. The last node in the list points to NULL
(nullptr in C++), indicating the end of the list. This node is known as the tail node.

Types of Linked List:


There are mainly three types of linked lists:
1. Singly linked list
2. Doubly linked list
3. Circular linked list

Singly linked list (Linear list)
In a singly linked list, each node contains a reference to the next node in the
sequence. Traversing a singly linked list is done in a forward direction.

To represent a node of a singly linked list in C, we use a structure with two
members, data and next:
struct Node {
    int data;
    struct Node* next;
};
where,
• data: indicates the value stored in the node.
• next: is a pointer that will store the address of the next node in the
sequence.

Doubly linked list
• In a doubly linked list, each node contains references to both the next and
previous nodes.
• This allows for traversal in both forward and backward directions, but it
requires additional memory for the backward reference.
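A minimal sketch of the corresponding node structure in C, mirroring the singly
linked node above (prev and next are the usual field names):

struct DNode {
    int data;
    struct DNode* prev;   // pointer to the previous node
    struct DNode* next;   // pointer to the next node
};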

Circular linked list


• In a circular linked list, the last node points back to the head node,
creating a circular structure. It can be either singly or doubly linked.

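A minimal sketch of traversing a non-empty circular singly linked list in C,
assuming the same struct Node as above (printCircular is just an illustrative name):

// Visit every node of a non-empty circular list exactly once.
// Assumes printf from <stdio.h>.
void printCircular(struct Node* head) {
    struct Node* curr = head;
    do {
        printf("%d ", curr->data);
        curr = curr->next;
    } while (curr != head);   // stop once we are back at the starting node
}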

Operations on Linked List


1. Insertion: Adding a new node to a linked list involves adjusting the
pointers of the existing nodes to maintain the proper sequence. Insertion
can be performed at the beginning, end, or any position within the list.
2. Deletion: Removing a node from a linked list requires adjusting the
pointers of the neighbouring nodes to bridge the gap left by the deleted
node. Deletion can be performed at the beginning, end, or any position
within the list.
3. Searching: Searching for a specific value in a linked list involves
traversing the list from the head node until the value is found or the end
of the list is reached.
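A minimal sketch of these three operations on a singly linked list, assuming the
struct Node defined above and <stdlib.h> for malloc/free (the function names
insertFront, deleteValue and search are illustrative):

// Insert a new node at the beginning; returns the new head.
struct Node* insertFront(struct Node* head, int value) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = value;
    node->next = head;
    return node;
}

// Delete the first node containing 'value'; returns the (possibly new) head.
struct Node* deleteValue(struct Node* head, int value) {
    struct Node* curr = head;
    struct Node* prev = NULL;
    while (curr != NULL && curr->data != value) {
        prev = curr;
        curr = curr->next;
    }
    if (curr == NULL) return head;           // value not found
    if (prev == NULL) head = curr->next;     // deleting the head node
    else prev->next = curr->next;            // bridge the gap left by curr
    free(curr);
    return head;
}

// Search for 'value'; returns 1 if found, 0 otherwise.
int search(struct Node* head, int value) {
    for (struct Node* curr = head; curr != NULL; curr = curr->next)
        if (curr->data == value) return 1;
    return 0;
}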

Linked representation of stack and Queue
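As a minimal sketch of this idea (assuming the singly linked struct Node defined
earlier and <stdlib.h>): a stack is represented by a pointer to its top node, so
push and pop work at the head in O(1); a queue keeps front and rear pointers,
with enqueue at the rear and dequeue at the front. The names below (push, pop,
struct Queue, enqueue, dequeue) are illustrative.

// Stack as a linked list: push and pop at the head (top).
void push(struct Node** top, int value) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = value;
    node->next = *top;     // new node becomes the new top
    *top = node;
}

int pop(struct Node** top, int* out) {
    if (*top == NULL) return 0;        // stack is empty
    struct Node* node = *top;
    *out = node->data;
    *top = node->next;
    free(node);
    return 1;
}

// Queue as a linked list: enqueue at the rear, dequeue at the front.
struct Queue {
    struct Node* front;   // dequeue from here
    struct Node* rear;    // enqueue here
};

void enqueue(struct Queue* q, int value) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = value;
    node->next = NULL;
    if (q->rear == NULL)               // queue was empty
        q->front = q->rear = node;
    else {
        q->rear->next = node;
        q->rear = node;
    }
}

int dequeue(struct Queue* q, int* out) {
    if (q->front == NULL) return 0;    // queue is empty
    struct Node* node = q->front;
    *out = node->data;
    q->front = node->next;
    if (q->front == NULL) q->rear = NULL;
    free(node);
    return 1;
}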


Dynamic storage management
• Dynamic memory allocation is a fundamental concept in data structures
and programming. It allows programs to allocate memory at runtime,
providing flexibility and efficiency when working with data structures of
varying sizes.
• In most programming languages, including C++, memory can be
classified into two categories: stack memory and heap memory.
• Local variables and function calls are stored in the stack memory,
whereas the more adaptable heap memory can be allocated and released
at runtime.
• The process of allocating and releasing memory from the heap is known
as dynamic memory allocation. It allows the programmer to manage
memory explicitly, providing the ability to create data structures of
varying sizes and adjust memory requirements dynamically.

1. malloc:
• The malloc (memory allocation) function is used to allocate a specified
number of bytes in memory. When memory is allocated successfully, it
returns a pointer to that block, otherwise it returns NULL.
• The size of the block is calculated by multiplying the required number of
elements by the size of each element. For example:
int* ptr = (int*) malloc(5 * sizeof(int)); // Allocates memory for 5 integers
2. calloc:

• “calloc” or “contiguous allocation” method in C is used to dynamically


allocate the specified number of blocks of memory of the specified type.
• It is very similar to malloc(), but differs from it in two ways:
• It initializes each block with a default value ‘0’.
• It takes two parameters or arguments, whereas malloc() takes one.
Syntax of calloc() in C
• ptr = (cast-type*)calloc(n, element-size);
here, n is the no. of elements and element-size is the size of each element.
For Example:
• ptr = (float*) calloc(25, sizeof(float));
This statement allocates contiguous space in memory for 25 elements,
each the size of a float.


3. realloc:
• “realloc” or “re-allocation” method in C is used to dynamically change
the memory allocation of a previously allocated memory.
• In other words, if the memory previously allocated with the help of
malloc or calloc is insufficient, realloc can be used to dynamically re-
allocate memory.
• Re-allocation of memory preserves the values already stored, while any newly
added blocks are left uninitialized (they hold garbage values).
• If space is insufficient, allocation fails and returns a NULL pointer.
Syntax of realloc() in C
ptr = realloc(ptr, newSize);
where ptr is reallocated with new size 'newSize'.

C free() method
• “free” method in C is used to dynamically de-allocate the memory.
• Memory allocated with malloc() and calloc() is not de-allocated on its own;
hence the free() method is used once dynamically allocated memory is no
longer needed.
• It helps to reduce wastage of memory by freeing it.
Syntax of free() in C
free(ptr);
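A small illustrative program tying these functions together (a sketch only, not
taken from the notes above):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Allocate space for 5 integers (uninitialized).
    int* arr = (int*)malloc(5 * sizeof(int));
    if (arr == NULL) return 1;              // allocation failed

    for (int i = 0; i < 5; i++)
        arr[i] = i + 1;

    // Grow the block to hold 10 integers; the old values are preserved.
    int* tmp = (int*)realloc(arr, 10 * sizeof(int));
    if (tmp == NULL) {                      // realloc failed; old block is still valid
        free(arr);
        return 1;
    }
    arr = tmp;

    for (int i = 5; i < 10; i++)
        arr[i] = i + 1;                     // initialize the newly added elements

    for (int i = 0; i < 10; i++)
        printf("%d ", arr[i]);
    printf("\n");

    free(arr);                              // release the memory
    return 0;
}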
Memory efficient doubly linked list

We know that each node in a doubly-linked list has two pointer fields which
contain the addresses of the previous and next node. On the other hand, each
node of the XOR linked list requires only a single pointer field, which doesn’t
store the actual memory addresses but stores the bitwise XOR of addresses for
its previous and next node.

In this section, we discuss both representations in order to show how the XOR
(memory efficient) representation of a doubly linked list differs from the
ordinary representation:
1. Ordinary representation
2. XOR representation
Let's look at the structure of each node of an ordinary doubly linked list and of
an XOR linked list.
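Compared with the ordinary doubly linked node sketched earlier (two pointer
fields, prev and next), a node of the XOR list needs only a single pointer field.
A minimal sketch (the type name XNode is illustrative; the program later in this
section uses struct Node with the same npx field):

struct XNode {
    int data;
    struct XNode* npx;   // stores XOR(address of previous node, address of next node)
};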

Types of XOR Linked List:


There are two main types of XOR Linked List:
1. Singly Linked XOR List: A singly XOR linked list is a variation of the XOR linked list that
uses the XOR operation to store the memory address of the next node in a singly
linked list. In this type of list, each node stores the XOR of the memory address of
the next node and the memory address of the current node.
2. Doubly Linked XOR List: A doubly XOR linked list is a variation of the XOR linked list
that uses the XOR operation to store the memory addresses of
the next and previous nodes in a doubly linked list. In this type of list, each node
stores the XOR of the memory addresses of the next and previous nodes.
Traversal in XOR linked list:
Two types of traversal are possible in XOR linked list.
1. Forward Traversal
2. Backward Traversal
Forward Traversal in XOR linked list:
When traversing the list forward, it's important to always keep the memory address of
the previous element. The address of the previous element helps in calculating the
address of the next element by the formula below:
address of next Node = (address of prev Node) ^ (both)
Here, “both” is the XOR of the address of the previous node and the address of the next node.
Backward Traversal in XOR linked list:
When traversing the list backward, it's important to always keep the memory address of
the next element. The address of the next element helps in calculating the address of
the previous element by the formula below:
address of previous Node = (address of next Node) ^ (both)
Here, “both” is the XOR of the address of the previous node and the address of the next node.
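As an illustration, backward traversal starting from the tail can be sketched as
follows, assuming the struct Node and the XOR() helper from the program later in
this section:

// Print the list backward, starting from the tail node.
// Assumes printf from <stdio.h>.
void printBackward(struct Node* tail) {
    struct Node* curr = tail;
    struct Node* next = NULL;              // the node that comes after curr
    while (curr != NULL) {
        printf("%d ", curr->data);
        // address of previous node = (address of next node) ^ npx
        struct Node* prev = XOR(next, curr->npx);
        next = curr;
        curr = prev;
    }
}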

Note: The list should be printed in both forward and backward directions.
Examples
Input: head= 40<->30<->20<->10
Output: 40 30 20 10
10 20 30 40
Input: head= 5<->4<->3<->2<->1
Output: 5 4 3 2 1
1 2 3 4 5
[Expected Approach] Using Bitwise XOR – O(n) Time and O(1) Space

// C program: implements a doubly linked
// list using XOR pointers
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct Node {
    int data;
    struct Node* npx;   // XOR of previous and next node addresses
};

struct Node* createNode(int data);

// XOR function to get XOR of two pointers
struct Node* XOR(struct Node* a, struct Node* b) {
    return (struct Node*)((uintptr_t)(a) ^ (uintptr_t)(b));
}

// Function to insert a node at the front of the list
struct Node* insert(struct Node* head, int data) {

    // Create a new node with the given data
    struct Node* new_node = createNode(data);

    // Make the new node's npx point to the head
    new_node->npx = XOR(head, NULL);

    // Update npx of the head if it's not NULL
    if (head != NULL) {
        struct Node* next = XOR(head->npx, NULL);
        head->npx = XOR(new_node, next);
    }

    // Return the new node as the new head
    return new_node;
}

// Function to retrieve the list as an array
void getList(struct Node* head, int* arr, int* len) {
    struct Node* curr = head;
    struct Node* prev = NULL;
    struct Node* next;

    // Initialize array index
    *len = 0;

    while (curr != NULL) {

        // Add current node's data to the array
        arr[(*len)++] = curr->data;

        // Calculate the next node using XOR
        next = XOR(prev, curr->npx);

        // Update previous and current nodes
        prev = curr;
        curr = next;
    }
}

struct Node* createNode(int data) {
    struct Node* new_node =
        (struct Node*)malloc(sizeof(struct Node));
    new_node->data = data;
    new_node->npx = NULL;
    return new_node;
}

int main() {

    // Create a hard-coded linked list:
    // 40 <-> 30 <-> 20 <-> 10 (since we insert at the front)
    struct Node* head = NULL;
    int list[100];
    int len, i;

    head = insert(head, 10);
    head = insert(head, 20);
    head = insert(head, 30);
    head = insert(head, 40);

    getList(head, list, &len);

    // Print the list in forward order
    for (i = 0; i < len; ++i) {
        printf("%d ", list[i]);
    }
    printf("\n");

    // Print the list in backward order
    for (i = len - 1; i >= 0; --i) {
        printf("%d ", list[i]);
    }
    printf("\n");

    return 0;
}

Here's a breakdown of the XOR function in the code:


1. Casting to uintptr_t:
o (uintptr_t)(a) and (uintptr_t)(b) convert the pointers a and b into an
integer type that is guaranteed to be large enough to store a pointer
(i.e., uintptr_t).
2. Bitwise XOR Operation:
o The XOR operation (uintptr_t)(a) ^ (uintptr_t)(b) performs a
bitwise XOR between the two pointer values. In XOR-based linked
list implementations, this is often used to store the "address
difference" of two nodes.
3. Casting Back to Pointer:
o The result of the XOR operation is cast back to a pointer
type (struct Node*).

unrolled Linked List

Like array and linked list, the unrolled Linked List is also a linear data structure
and is a variant of a linked list.
Why do we need an unrolled linked list?
One of the biggest advantages of linked lists over arrays is that inserting an
element at any location takes only O(1) time (once the insertion point is
reached). The catch, however, is that searching for an element in a linked list
takes O(n) time. The unrolled linked list was introduced to reduce this search
time. It combines the advantages of arrays and linked lists: it reduces memory
overhead compared to a simple linked list by storing multiple elements in each
node, while retaining the fast insertion and deletion of a linked list.
Advantages:
• Because of the Cache behavior, linear search is much faster in unrolled
linked lists.
• In comparison to the ordinary linked list, it requires less storage space for
pointers/references.
• It performs operations like insertion, deletion, and traversal more quickly
than ordinary linked lists (because search is faster).
Disadvantages:
• The overhead per node is comparatively higher than in singly linked lists.
Refer to the example node in the code sketch below.
Example: Let's say we have 8 elements, so sqrt(8) ≈ 2.83, which rounds up to 3.
So each block will store up to 3 elements. Hence, to store 8 elements, 3 blocks
will be created, of which the first two blocks store 3 elements each and the last
block stores 2 elements.
How does searching become better in unrolled linked lists?
Taking the above example, if we want to search for the 7th element in the list,
we traverse the list of blocks up to the block that contains the 7th element.
This takes only O(sqrt(n)) time, since we never visit more than sqrt(n) blocks.
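A minimal sketch of an unrolled linked list node and the block-wise lookup
described above (the type UNode, its field names and BLOCK_SIZE are assumptions
of this sketch):

#define BLOCK_SIZE 3    // roughly sqrt(n) elements per block

// One block (node) of an unrolled linked list.
struct UNode {
    int numElements;            // how many slots of 'elements' are in use
    int elements[BLOCK_SIZE];   // multiple elements stored in one node
    struct UNode* next;         // pointer to the next block
};

// Return the element at position 'index' (0-based), or -1 if out of range.
// Whole blocks are skipped first, so roughly sqrt(n) blocks are visited
// when each block holds about sqrt(n) elements.
int getAt(struct UNode* head, int index) {
    struct UNode* curr = head;
    while (curr != NULL && index >= curr->numElements) {
        index -= curr->numElements;   // skip this entire block
        curr = curr->next;
    }
    return (curr != NULL) ? curr->elements[index] : -1;
}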
Skip List

A skip list is a data structure that allows for efficient search, insertion and
deletion of elements in a sorted list. It is a probabilistic data structure, meaning
that its average time complexity is determined through a probabilistic analysis.
In a skip list, elements are organized in layers, with each layer having a smaller
number of elements than the one below it. The bottom layer is a regular linked
list, while the layers above it contain “skipping” links that allow for fast
navigation to elements that are far apart in the bottom layer. The idea behind
this is to allow for quick traversal to the desired element, reducing the average
number of steps needed to reach it.
Skip lists are implemented using a technique called “coin flipping.” In this
technique, a random number is generated for each insertion to determine the
number of layers the new element will occupy. With this scheme, the expected
number of layers in the list is O(log n), where n is the number of elements in the
bottom layer.
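A minimal sketch of the coin-flipping step (the names randomLevel and maxLevel
are illustrative):

// Keep promoting the new element one layer up while a fair coin
// (rand() from <stdlib.h>) keeps coming up heads, capped at maxLevel.
int randomLevel(int maxLevel) {
    int level = 1;
    while (level < maxLevel && (rand() % 2) == 0)
        level++;
    return level;
}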
Skip lists have an average time complexity of O(log n) for search, insertion and
deletion, which is similar to that of balanced trees, such as AVL trees and red-
black trees, but with the advantage of simpler implementation and lower
overhead.
In summary, skip lists provide a simple and efficient alternative to balanced
trees for certain use cases, particularly when the number of elements in the list
is large.
Can we search in a sorted linked list better than O(n) time? The worst-case
search time for a sorted linked list is O(n) as we can only linearly traverse the
list and cannot skip nodes while searching. For a Balanced Binary Search Tree,
we skip almost half of the nodes after one comparison with the root. For a
sorted array, we have random access and we can apply Binary Search on arrays.
Can we augment sorted linked lists to search faster? The answer is Skip List.
The idea is simple, we create multiple layers so that we can skip some nodes.
See the following example list with 16 nodes and two layers. The upper layer
works as an “express lane” that connects only the main outer stations, and the
lower layer works as a “normal lane” that connects every station. Suppose we
want to search for 50, we start from the first node of the “express lane” and
keep moving on the “express lane” till we find a node whose next is greater than
50. Once we find such a node (30 is the node in the following example) on
“express lane”, we move to “normal lane” using a pointer from this node, and
linearly search for 50 on “normal lane”. In the following example, we start from
30 on the “normal lane” and with linear search, we find
50.
What is the time complexity with two layers? The worst-case time complexity is
the number of nodes on the “express lane” plus the number of nodes in a segment
(a segment is the set of “normal lane” nodes between two “express lane” nodes) of
the “normal lane”. So if we have n nodes on the “normal lane”, √n (square root of
n) nodes on the “express lane”, and we divide the “normal lane” equally, then
there will be √n nodes in every segment of the “normal lane”. √n is the optimal
division with two layers. With this arrangement, the number of nodes traversed
for a search will be O(√n). Therefore, with O(√n) extra space, we can reduce the
time complexity to O(√n).
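A minimal sketch of this layered search in C (the type SkipNode, its down
pointer, and the assumption that the first node of the list appears on every lane
are illustrative choices):

// A node in a layered skip list: 'next' moves along the current lane,
// 'down' drops to the copy of the same node on the lane below
// (NULL on the bottom, "normal", lane).
struct SkipNode {
    int key;
    struct SkipNode* next;
    struct SkipNode* down;
};

// Search for 'key' starting from the head of the topmost (express) lane.
// Returns 1 if the key is present, 0 otherwise.
int skipSearch(struct SkipNode* head, int key) {
    struct SkipNode* curr = head;
    while (curr != NULL) {
        // Move right while the next key on this lane is still <= key.
        while (curr->next != NULL && curr->next->key <= key)
            curr = curr->next;
        if (curr->key == key) return 1;    // found on this lane
        curr = curr->down;                 // drop to the lane below
    }
    return 0;
}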
Advantages of Skip List:
• The skip list is reliable and robust.
• New nodes can be inserted very quickly.
• It is easier to implement than a hash table or a binary search tree.
• As the number of nodes in the skip list increases, the probability of hitting
the worst case decreases.
• All operations require only O(log n) time on average.
• Finding a node in the list is relatively straightforward.
Disadvantages of Skip List:
• It needs a greater amount of memory than a balanced tree.
• Reverse searching is not allowed.
• In the worst case, searching can degrade to O(n), as slow as an ordinary
linked list, whereas a balanced tree guarantees O(log n).
• Skip lists are not cache-friendly because they do not optimize locality
of reference.
