DATA STRUCTURE LAB FILE

Submitted by
ADESH SINGH
2300950100003
STCS- A

BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
(DEPARTMENT OF SCIENCE & ENGINEERING)

MGM’S COLLEGE OF ENGINEERING AND TECHNOLOGY,
SEC-62, Noida
INDEX

S.NO.  OBJECTIVE
1      Implementing sorting techniques:
       Bubble sort, Selection sort, Insertion sort, Quick sort
2      Implementing searching techniques:
       Binary search, Linear search
3      Operations on stack:
       Push() and Pop()
4      Implementing queue:
       Using linked list
5      Implementation of circular linked list:
       Insertion and deletion
6      Traversing and searching in trees
7      Adjacency matrix representation of graphs
       (directed and undirected)
AIM: Implementing Sorting Techniques: Bubble sort, Selection sort, Insertion sort,
Quick sort.

SORTING is the process of arranging elements in a specific order, typically in
ascending or descending order. It is a fundamental operation in computer science
that helps organize data for efficient searching, comparison, and analysis. Sorting
algorithms, like bubble sort or quick sort, reorder elements in a list or array based on
a predefined criterion, such as numerical or lexicographical order.

BUBBLE SORT works by repeatedly swapping adjacent elements until they are in
the intended order. It is called bubble sort because the movement of array elements
resembles the movement of air bubbles in water: just as bubbles rise to the surface,
the largest unsorted element moves to the end of the array in each iteration.

CHARACTERISTICS OF BUBBLE SORT:

1. Sorting Technique: Comparison-based, uses repeated swapping.
2. Time Complexity:
   - Best case (already sorted): O(n)
   - Worst and average case: O(n²)
3. Space Complexity: O(1), as it sorts in place without requiring extra space.
4. Stability: Bubble sort is stable; it maintains the relative order of equal
   elements.
5. Adaptive: Can terminate early if the list becomes sorted before
   completing all passes.

USAGE OF BUBBLE SORT

- Teaching basic sorting concepts due to its simplicity.
- Small datasets where efficiency is not critical.
- Demonstrating sorting mechanisms and stability.

ADVANTAGES OF BUBBLE SORT


1. Simple to Understand and Implement : Suitable for beginners learning
sorting algorithms.
2. In-Place Sorting: Requires no extra memory beyond the input array.

3. Stability: Maintains relative order of duplicate elements.

4. Adaptive Nature: This can stop early if the array becomes sorted before
finishing all iterations.

DISADVANTAGES OF BUBBLE SORT


1. Inefficient for Large Datasets: High time complexity O(n²) makes it
   impractical for large inputs.

2. Slow Performance: Its many comparisons and swaps make it slower than
   other algorithms like quicksort or mergesort, even on small datasets.

3. Excessive Comparisons and Swaps: This results in unnecessary

operations compared to more optimized algorithms.

INPUT CODE
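
Below is a minimal C sketch of bubble sort for reference; the function name
bubbleSort, the sample array values, and the early-exit swapped flag are
illustrative assumptions rather than the exact code from the lab file.

#include <stdio.h>

/* Sort arr[0..n-1] in ascending order by repeatedly swapping adjacent
   elements; stop early if a full pass performs no swaps. */
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)          /* already sorted: adaptive early exit */
            break;
    }
}

int main(void) {
    int arr[] = {64, 25, 12, 22, 11};
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}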

OUTPUT CODE
INSERTION SORT is a simple sorting algorithm that works by iteratively inserting
each element of an unsorted list into its correct position in a sorted portion of the list.
It is like sorting playing cards in your hands. You split the cards into two groups: the
sorted cards and the unsorted cards. Then, you pick a card from the unsorted group
and put it in the right place in the sorted group.

CHARACTERISTICS OF INSERTION SORT:

1. Sorting Technique: Insertion Sort is a comparison-based algorithm that
   sorts elements by inserting them into their correct position in the sorted
   portion of the array.

2. Time Complexity:
   - Best Case (already sorted): O(n)
   - Worst and Average Case: O(n²)

3. Space Complexity: O(1), as it is an in-place sorting algorithm.

4. Stability: It is stable, meaning that it preserves the relative order of equal
   elements.

5. Adaptive: It performs well with partially sorted or nearly sorted data,
   making fewer shifts in those cases.

USAGE OF INSERTION SORT:

1. Small Datasets: It is efficient for small datasets, where the overhead of
   more complex algorithms like Quick Sort is not justified.

2. Nearly Sorted Data: Insertion Sort is highly effective when the data is
   already mostly sorted because it performs fewer operations.

3. Educational Purposes: Due to its simplicity, Insertion Sort is often used to
   teach sorting concepts in introductory computer science courses.

4. Online Sorting: It can be used when data arrives sequentially and must be
   sorted as it arrives (like in real-time applications).

ADVANTAGES OF INSERTION SORT:


1. Simplicity: It is easy to understand and implement, making it a good choice

for teaching the fundamentals of sorting.

2. Efficient for Small Data: For small datasets or nearly sorted data, Insertion

Sort is fast and has a low overhead compared to more complex algorithms.

3. In-Place Sorting: It does not require additional memory, making it
   space-efficient.

4. Stable Sort: It preserves the relative order of equal elements, which is

useful when sorting records with multiple fields.

5. Adaptive: It adapts well to partially sorted data, reducing the number of


operations needed.

DISADVANTAGES OF INSERTION SORT:


1. Inefficient for Large Datasets: With a time complexity of O(n²) in the
   worst and average cases, Insertion Sort becomes slow and inefficient for
   large datasets.

2. Excessive Comparisons and Shifts: For unsorted data, Insertion Sort
   requires a large number of comparisons and shifts, making it slower than
   more advanced sorting algorithms like Quick Sort and Merge Sort.

3. Poor Performance on Random Data: It performs poorly on random
   datasets, as it doesn't take advantage of any existing order in the data.

IMPLEMENTATION
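
A minimal C sketch of insertion sort, which shifts larger elements one place to
the right to make room for each key; the function name insertionSort and the
sample values are illustrative assumptions, not the exact lab code.

#include <stdio.h>

/* Insert each element into its correct position within the sorted
   prefix arr[0..i-1], shifting larger elements one place right. */
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}

int main(void) {
    int arr[] = {12, 11, 13, 5, 6};
    int n = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}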

OUTPUT CODE

SELECTION SORT is a comparison-based sorting algorithm. It sorts an array by
repeatedly selecting the smallest (or largest) element from the unsorted portion and
swapping it with the first unsorted element. This process continues until the entire
array is sorted.

CHARACTERISTICS OF SELECTION SORT:

1. Sorting Technique: Comparison-based, uses selection and swapping.

2. Time Complexity:
   - Best, Average, and Worst Case: O(n²)

3. Space Complexity: O(1), as it is an in-place sorting algorithm.

4. Stability: Selection Sort is not stable. It may change the relative order
   of equal elements.

5. Adaptive: It is not adaptive, meaning it doesn't perform better on partially
   sorted data.

USAGE OF SELECTION SORT:


- Small Datasets: It is easy to implement but inefficient for large datasets due
  to its O(n²) complexity.
- Memory-Constrained Environments: As an in-place algorithm with O(1)
space complexity, Selection Sort is used in situations where memory is
limited.
- Educational Purposes: It is often used to introduce basic sorting algorithms
due to its simplicity.

ADVANTAGES OF SELECTION SORT:


1. Simple to Understand and Implement: It has a straightforward approach,
making it easy to implement for beginners.

2. In-Place Sorting: It doesn’t require extra memory beyond the input array.
3. Works Well for Small Arrays: For small datasets, its simplicity can make it
acceptable in terms of performance.

DISADVANTAGES OF SELECTION SORT:


1. Inefficient for Large Datasets: Its O(n²) time complexity makes it
   inefficient for larger datasets compared to more advanced algorithms like
   Quick Sort or Merge Sort.
2. Not Stable: Selection Sort may not maintain the relative order of equal
elements, which can be important in some applications.
3. Excessive Comparisons: It makes n(n-1)/2 comparisons, even though the
array may already be partially sorted.

IMPLEMENTATION
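
A minimal C sketch of selection sort, which repeatedly swaps the smallest
remaining element into the front of the unsorted portion; the function name and
sample values are illustrative assumptions.

#include <stdio.h>

/* Repeatedly select the smallest element of the unsorted suffix
   and swap it into position i. */
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIdx = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[minIdx])
                minIdx = j;
        int tmp = arr[i];
        arr[i] = arr[minIdx];
        arr[minIdx] = tmp;
    }
}

int main(void) {
    int arr[] = {29, 10, 14, 37, 13};
    int n = sizeof(arr) / sizeof(arr[0]);
    selectionSort(arr, n);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}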

OUTPUT CODE

QUICK SORT is an efficient, comparison-based sorting algorithm that uses a
divide-and-conquer strategy. It works by selecting a "pivot" element from the array,
partitioning the remaining elements into two subarrays (one with elements smaller
than the pivot and one with elements larger), and then recursively sorting the
subarrays.

CHARACTERISTICS OF QUICK SORT:


1. Sorting Technique: Divide-and-conquer strategy, with partitioning and
   recursion.

2. Time Complexity:
   - Best and Average Case: O(n log n)
   - Worst Case: O(n²), occurs when the pivot is the smallest or largest
     element repeatedly (e.g., sorted or reverse-sorted data).

3. Space Complexity:
   - In-Place Version: O(log n) for the recursion stack.
   - Non-in-place versions may require additional memory for subarrays.

4. Stability: Quick Sort is not stable (the relative order of equal elements may
   change).

5. Adaptive: Quick Sort does not adapt well to sorted or nearly sorted data,
   requiring optimizations like random pivot selection.

USAGE OF QUICK SORT:


1. Large Datasets: Quick Sort is highly efficient for large datasets, especially
when average case O(n log n) performance is required.
2. Practical Applications: Used in many standard libraries due to its
efficiency.
3. Arrays: Preferred for in-place sorting of arrays when stability is not a
requirement.
ADVANTAGES OF QUICK SORT:
1. Fast on Average: It is one of the fastest sorting algorithms in practice, with
an average time complexity of O(n log n) .

2. In-Place Sorting: It requires minimal additional memory for the in-place


version.
3. Versatile: It works well on most data distributions with proper pivot
selection.
4. Widely Used: Highly optimized and implemented in many standard libraries
(like C++ STL's `std::sort`).
DISADVANTAGES OF QUICK SORT:
1. Worst-Case Complexity: O(n²) in the worst case, though this can be
   mitigated with random pivot selection.
2. Not Stable: It does not maintain the relative order of equal elements, which
   can be a problem in some cases.
3. Recursive Nature: Can cause stack overflow for large datasets if recursion
   is not optimized (e.g., using tail recursion).

IMPLEMENTATION
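
A minimal C sketch of quick sort using the Lomuto partition scheme with the last
element as pivot; this particular partition scheme, the function names, and the
sample values are assumptions for illustration, and other pivot choices work as well.

#include <stdio.h>

/* Lomuto partition: place the pivot (last element) in its final
   position and return that index. */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
        }
    }
    int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
    return i + 1;
}

/* Recursively sort the two partitions on either side of the pivot. */
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);
        quickSort(arr, p + 1, high);
    }
}

int main(void) {
    int arr[] = {10, 7, 8, 9, 1, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    quickSort(arr, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}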

AIM : Implementing Searching Techniques: Binary search, linear search

SEARCHING is the process of finding a specific element or value within a
collection of data, such as an array, list, or database. The two most commonly used
searching techniques are LINEAR SEARCH and BINARY SEARCH.

TYPES OF SEARCHING:
1. LINEAR SEARCH: This is the simplest searching algorithm. In linear

search, the algorithm starts at the beginning of the list and checks each
element one by one until the desired element is found or the entire list is
traversed.
- Time Complexity:
  - Worst-case: O(n)
  - Best-case: O(1) (if the element is found at the first position)
  - Average-case: O(n)
- Usage: This is used when the data is unsorted or when the dataset is
  small and quick results are not critical.

IMPLEMENTATION
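
A minimal C sketch of linear search that returns the index of the key, or -1 if it
is absent; the function name and sample values are illustrative assumptions.

#include <stdio.h>

/* Scan the array left to right; return the index of key or -1. */
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;
    return -1;
}

int main(void) {
    int arr[] = {34, 7, 23, 32, 5};
    int n = sizeof(arr) / sizeof(arr[0]);
    int pos = linearSearch(arr, n, 23);
    if (pos >= 0)
        printf("Found at index %d\n", pos);
    else
        printf("Not found\n");
    return 0;
}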

OUTPUT

2. BINARY SEARCH: This efficient searching algorithm works only on sorted
   arrays. It repeatedly divides the search interval in half. If the value of the target
   element is less than the value at the midpoint of the array, the search
   continues in the lower half; otherwise, it continues in the upper half.
   - Time Complexity:
     - Worst-case: O(log n)
     - Best-case: O(1) (if the target is at the midpoint)
     - Average-case: O(log n)
   - Usage: Binary search is used when the dataset is sorted, providing faster
     search results than linear search, especially in large datasets.

IMPLEMENTATION
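
A minimal C sketch of an iterative binary search on a sorted array; the iterative
form, the function name, and the sample values are illustrative assumptions (a
recursive version is equally valid).

#include <stdio.h>

/* Halve the search interval of a sorted array until key is found
   or the interval is empty; return the index or -1. */
int binarySearch(const int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* avoids integer overflow */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;
        else
            high = mid - 1;
    }
    return -1;
}

int main(void) {
    int arr[] = {2, 5, 8, 12, 16, 23, 38};  /* must already be sorted */
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("Index of 23: %d\n", binarySearch(arr, n, 23));
    return 0;
}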

OUTPUT

OTHER SEARCHING ALGORITHMS:

1. HASHING: A more advanced searching technique, hashing uses a hash
   function to map the search element to a specific index in a hash table. It allows
   for constant average-case time complexity O(1) for search operations.
   - Usage: Frequently used in databases and caching systems.

2. JUMP SEARCH: This search works on sorted arrays and is similar to binary
   search, but instead of dividing the list in half, it jumps ahead by a fixed step size
   (like the square root of the array size).
   - Time Complexity: O(√n)

3. EXPONENTIAL SEARCH: Used when the range of the dataset is unbounded
   or for very large datasets. It starts by checking exponentially increasing indices,
   then performs a binary search within the range once it finds a possible interval.
   - Time Complexity: O(log n)

AIM : Operations on Stack: Push() and pop()

STACK is a linear data structure that follows the Last In, First Out (LIFO)
principle. In a stack, the element added most recently is the first one to be removed.
You can think of a stack like a stack of plates, where you add new plates to the top
and also remove them from the top.

BASIC OPERATIONS ON STACK:


1. Push:
   ○ Description: Adds an element to the top of the stack.
   ○ Operation: Push(stack, item), where item is the element to be added.
   ○ Time Complexity: O(1)
2. Pop:
   ○ Description: Removes the top element from the stack.
   ○ Operation: Pop(stack)
   ○ Time Complexity: O(1)
   ○ Note: This operation also returns the element that was removed.
3. Peek (or Top):
   ○ Description: Returns the top element of the stack without removing it.
   ○ Operation: Peek(stack)
   ○ Time Complexity: O(1)
4. IsEmpty:
   ○ Description: Checks whether the stack is empty.
   ○ Operation: IsEmpty(stack)
   ○ Time Complexity: O(1)
   ○ Return: Returns True if the stack is empty, otherwise False.
5. Size:
   ○ Description: Returns the number of elements in the stack.
   ○ Operation: Size(stack)
   ○ Time Complexity: O(1)

STACK REPRESENTATION:
Stacks can be implemented using two types of data structures:

1. Array-Based Stack: An array is used to store elements. An index pointer
   keeps track of the top element.
2. Linked List-Based Stack: A linked list is used where each node points to the
   next node, and the stack is managed by the head pointer.
STACK USE CASES:
1. Function Call Stack : When a function is called in programming, the
function’s state ( including local variables and return addresses) is stored in the
stack.
2. Expression Evaluation : Infix, postfix, and prefix expressions are evaluated
using stacks.
3. Undo Mechanism in Software : Stacks are used to implement undo
operations, where the most recent action can be undone.
4. Browser History : The browser's back button functionality uses a stack to
keep track of visited pages.

ADVANTAGES OF STACK:
● Simple: Operations are easy to understand and implement.
● Efficient: Constant time complexity for push, pop, and peek operations.
● Memory-efficient: Array-based stacks store only the elements themselves,
  without the per-node pointer overhead of linked structures.

DISADVANTAGES OF STACK:
● Limited Access: Only the top element can be accessed. If you need to access
elements further down, you have to pop elements off the stack.
● Fixed Size: In array-based stacks, the size is fixed at the time of creation,
leading to a risk of overflow if the stack grows beyond its capacity.

IMPLEMENTATION
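
A minimal C sketch of an array-based stack with push and pop, including overflow
and underflow checks; the fixed capacity MAX and the use of global variables are
simplifying assumptions for illustration.

#include <stdio.h>

#define MAX 100   /* fixed capacity of the array-based stack */

int stack[MAX];
int top = -1;     /* -1 means the stack is empty */

/* Push: add an element on top, reporting overflow if the stack is full. */
void push(int item) {
    if (top == MAX - 1) {
        printf("Stack overflow\n");
        return;
    }
    stack[++top] = item;
}

/* Pop: remove and return the top element, reporting underflow if empty. */
int pop(void) {
    if (top == -1) {
        printf("Stack underflow\n");
        return -1;
    }
    return stack[top--];
}

int main(void) {
    push(10);
    push(20);
    push(30);
    printf("Popped: %d\n", pop());   /* 30, the most recently pushed */
    printf("Popped: %d\n", pop());   /* 20 */
    return 0;
}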

OUTPUT

AIM: Implementing queue using linked list.

QUEUE is a linear data structure that follows the First In, First Out (FIFO)
principle. In a queue, the element added first will be the first one to be removed,
similar to a line of people waiting for a service. Elements are added at the rear (tail)
and removed from the front (head).

OPERATIONS IN A QUEUE:
1. ENQUEUE:
   ○ Description: Adds an element to the rear of the queue.
   ○ Operation: Enqueue(queue, item), where item is the element to add.
   ○ Time Complexity: O(1)

2. DEQUEUE:
   ○ Description: Removes an element from the front of the queue.
   ○ Operation: Dequeue(queue)
   ○ Time Complexity: O(1)
   ○ Note: If the queue is empty, a dequeue operation results in an underflow
     condition.

3. PEEK (OR FRONT):
   ○ Description: Returns the element at the front of the queue without removing it.
   ○ Operation: Peek(queue)
   ○ Time Complexity: O(1)

4. ISEMPTY:
   ○ Description: Checks whether the queue is empty.
   ○ Operation: IsEmpty(queue)
   ○ Time Complexity: O(1)

5. ISFULL (for fixed-size queues):
   ○ Description: Checks whether the queue is full.
   ○ Operation: IsFull(queue)
   ○ Time Complexity: O(1)

TYPES OF QUEUES:
1. Simple Queue :
○ Elements are added at the rear and removed from the front.
○ Has a fixed size.
2. Circular Queue :
○ A circular queue connects the rear to the front, allowing efficient use of
space.
○ The queue wraps around once the end of the array is reached.
3. Priority Queue :
○ Elements are dequeued based on their priority rather than their order of
arrival.
4. Double-Ended Queue (Deque) :
○ Elements can be added or removed from both ends (front and rear).

APPLICATIONS OF QUEUES:

1. Task Scheduling: Used in job scheduling and CPU task management.
2. Data Streaming: Handling real-time data streams, as in networks or media
   players.
3. Breadth-First Search (BFS): Queues are essential in graph traversal
   algorithms.
4. Printing Tasks: Print queues manage documents sent to a printer.

ADVANTAGES OF QUEUES:

1. Efficient Data Management: Useful for sequential data processing.

2. FIFO Nature: Ensures fairness in resource allocation ( e.g., task scheduling).

3. Simple Implementation: Easy to implement with arrays or linked lists.

DISADVANTAGES OF QUEUES:

1. Fixed Size: In array-based queues, the size is fixed and can lead to overflow.

2. Limited Access: You can only access the front and rear elements directly.

3. Inefficiency in Simple Queues: Space can be wasted when elements are
   dequeued and the rear pointer cannot wrap around (solved in circular queues).
IMPLEMENTATION
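
A minimal C sketch of a queue implemented with a singly linked list, using front
and rear pointers for O(1) enqueue and dequeue; the global pointers, function
names, and sample values are simplifying assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Each queue element is a node; front and rear pointers give O(1)
   enqueue at the rear and dequeue at the front. */
struct Node {
    int data;
    struct Node *next;
};

struct Node *front = NULL, *rear = NULL;

void enqueue(int item) {
    struct Node *n = malloc(sizeof(struct Node));
    n->data = item;
    n->next = NULL;
    if (rear == NULL) {        /* empty queue: new node is both ends */
        front = rear = n;
    } else {
        rear->next = n;
        rear = n;
    }
}

int dequeue(void) {
    if (front == NULL) {
        printf("Queue underflow\n");
        return -1;
    }
    struct Node *tmp = front;
    int item = tmp->data;
    front = front->next;
    if (front == NULL)         /* queue became empty */
        rear = NULL;
    free(tmp);
    return item;
}

int main(void) {
    enqueue(1);
    enqueue(2);
    enqueue(3);
    printf("Dequeued: %d\n", dequeue());   /* 1, the first enqueued */
    printf("Dequeued: %d\n", dequeue());   /* 2 */
    return 0;
}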

OUTPUT

AIM: Implementation of Circular linked list: Insertion and deletion.

LINKED LIST is a linear data structure where elements, called nodes, are
connected through pointers. Unlike arrays, linked lists do not require contiguous
memory locations, making them dynamic in size and efficient for certain operations.

STRUCTURE OF A LINKED LIST:


1. Node :
○ Data : The value stored in the node.
○ Next : A pointer to the next node in the list.
2. Head :
○ A pointer to the first node in the list.

TYPES OF LINKED LISTS:


1. Singly Linked List:
○ Each node points to the next node.
○ Traversal is one-directional.
2. Doubly Linked List:
   ○ Each node contains pointers to both its previous and next nodes.
   ○ Allows traversal in both directions.
3. Circular Linked List :
○ The last node points back to the first node, forming a circular
structure.

BASIC OPERATIONS:
1. Insertion:
   ○ Add a new node to the list (at the beginning, end, or a specific position).
2. Deletion:
   ○ Remove a node (by value, position, or at the beginning or end).
3. Traversal:
○ Visit each node in the list to display or process data.
4. Search :
○ Find a node containing a specific value.

ADVANTAGES OF LINKED LISTS:


1. Dynamic Size: Can grow or shrink as needed.

2. Efficient Insertion/Deletion : Does not require shifting elements, unlike
arrays.
3. Memory Utilization : Uses only as much memory as required for the data.
DISADVANTAGES OF LINKED LISTS:
1. Extra Memory: Requires additional memory for pointers.
2. Sequential Access: Cannot access elements randomly; must traverse the list.
3. Complex Implementation: More difficult to implement compared to arrays.

APPLICATIONS OF LINKED LISTS:


1. Dynamic Memory Allocation: Used in operating systems and memory
management.
2. Data Structures: Foundation for stacks, queues, hash tables, etc.
3. Undo Mechanisms: In applications like text editors.
4. Graph Representations: Represent adjacency lists in graphs.

IMPLEMENTATION
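
A minimal C sketch of a circular singly linked list with insertion at the end and
deletion by value, keeping a pointer to the last node so that last->next is always
the head; the function names and sample values are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>

/* Circular singly linked list: the last node points back to the head. */
struct Node {
    int data;
    struct Node *next;
};

/* Insert a new node at the end; return the new last node. */
struct Node *insertEnd(struct Node *last, int data) {
    struct Node *n = malloc(sizeof(struct Node));
    n->data = data;
    if (last == NULL) {        /* empty list: node points to itself */
        n->next = n;
        return n;
    }
    n->next = last->next;      /* new node points to the head */
    last->next = n;
    return n;
}

/* Delete the first node holding key; return the (possibly new) last node. */
struct Node *deleteNode(struct Node *last, int key) {
    if (last == NULL) return NULL;
    struct Node *prev = last, *cur = last->next;
    do {
        if (cur->data == key) {
            if (cur->next == cur) {            /* only node in the list */
                last = NULL;
            } else {
                prev->next = cur->next;
                if (cur == last) last = prev;  /* deleted the last node */
            }
            free(cur);
            return last;
        }
        prev = cur;
        cur = cur->next;
    } while (cur != last->next);
    return last;                               /* key not found */
}

/* Print the list once around, starting from the head. */
void display(struct Node *last) {
    if (last == NULL) { printf("List is empty\n"); return; }
    struct Node *cur = last->next;
    do {
        printf("%d ", cur->data);
        cur = cur->next;
    } while (cur != last->next);
    printf("\n");
}

int main(void) {
    struct Node *last = NULL;
    last = insertEnd(last, 10);
    last = insertEnd(last, 20);
    last = insertEnd(last, 30);
    display(last);             /* 10 20 30 */
    last = deleteNode(last, 20);
    display(last);             /* 10 30 */
    return 0;
}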

OUTPUT

AIM: Traversing and Searching in Trees.

TREE is a hierarchical data structure that consists of nodes connected by edges,
where each node represents an entity, and the edges define the relationships
between them. The structure resembles an inverted tree, with a single root node at
the top and branches extending to child nodes, forming levels of hierarchy. This
unique organization allows trees to represent relationships such as parent-child,
ancestry, or dependency effectively. Trees are extensively used in computer science
for tasks like storing hierarchical data (e.g., file systems), enabling efficient searches
(as in binary search trees), and managing dynamic data structures like heaps and
tries. The versatility and clarity of the tree structure make it a fundamental concept
in data organization and algorithm design.

KEY TERMS IN A TREE:


1. Root: The topmost node in a tree. It has no parent.
2. Parent: A node that has one or more child nodes.
3. Child: A node that descends from a parent node.
4. Leaf: A node with no children.
5. Edge: The connection between two nodes.
6. Subtree: A tree formed by a node and its descendants.
7. Height of a Tree: The length of the longest path from the root to a leaf.
8. Depth of a Node: The number of edges from the root to the node.
9. Degree of a Node: The number of children a node has.

PROPERTIES OF TREES:
1. Acyclic: Trees do not contain cycles.
2. Connected: All nodes are connected by edges.
3. Hierarchical Structure: Represents parent-child relationships.
4. One Root: A tree has exactly one root node.
5. Unique Path: There is exactly one path between any two nodes.

TYPES OF TREES:
1. Binary Tree :
○ Each node can have at most two children.
○ Common types:

■ Full Binary Tree: Every node has 0 or 2 children.
■ Complete Binary Tree: All levels except possibly the last are filled, and
  nodes are as far left as possible.
■ Perfect Binary Tree: All internal nodes have two children, and all
  leaves are at the same level.
2. Binary Search Tree (BST) :
○ A binary tree where:
■ Left child nodes have values smaller than the parent.
■ Right child nodes have values greater than the parent.
3. Balanced Tree :
○ A tree where the difference between the heights of left and right subtrees
for any node is at most 1.
4. General Tree :
○ A tree where nodes can have any number of children.
5. AVL Tree :
○ A self-balancing binary search tree.
6. Heap :
○ A complete binary tree where:
■ Max-Heap: Parent nodes are larger than their children.
■ Min-Heap: Parent nodes are smaller than their children.

BASIC OPERATIONS:
1. Traversal:
   ○ Visit each node in a specific order:
     ■ Preorder: Visit root, traverse left subtree, traverse right subtree.
     ■ Inorder: Traverse left subtree, visit root, traverse right subtree.
     ■ Postorder: Traverse left subtree, traverse right subtree, visit root.
     ■ Level-order: Traverse nodes level by level.
2. Insertion :
○ Add a new node to the tree while maintaining its properties.
3. Deletion:

○ Remove a node and adjust the tree structure to maintain its properties.
4. Searching :
○ Find a node with a specific value.
5. Height Calculation :
○ Compute the height of the tree.

APPLICATIONS OF TREES:
1. Hierarchical Data Representation :
○ File systems, organizational charts.
2. Searching and Sorting :
○ Binary search trees, heaps.
3. Parsing Expressions :
○ Expression trees in compilers.
4. Networking :
○ Spanning trees in computer networks.
5. Database Management :
○ B-Trees and B+ Trees for indexing.

ADVANTAGES OF TREES:
● Flexible structure for representing hierarchical data.
● Efficient searching, insertion, and deletion in binary search trees.
● Optimal for certain algorithms like Huffman coding.

DISADVANTAGES OF TREES:
● Can become unbalanced, leading to inefficiency in operations.
● Complex to implement compared to simpler data structures like arrays or
linked lists.

IMPLEMENTATION
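
A minimal C sketch of a binary search tree with insertion, inorder traversal, and
search; the function names and sample keys are illustrative assumptions. Inorder
traversal is shown because it visits BST keys in ascending order.

#include <stdio.h>
#include <stdlib.h>

/* Binary search tree node. */
struct Node {
    int data;
    struct Node *left, *right;
};

struct Node *newNode(int data) {
    struct Node *n = malloc(sizeof(struct Node));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}

/* Insert into a BST: smaller keys go left, larger keys go right. */
struct Node *insert(struct Node *root, int data) {
    if (root == NULL) return newNode(data);
    if (data < root->data)
        root->left = insert(root->left, data);
    else
        root->right = insert(root->right, data);
    return root;
}

/* Inorder traversal of a BST prints the keys in ascending order. */
void inorder(struct Node *root) {
    if (root == NULL) return;
    inorder(root->left);
    printf("%d ", root->data);
    inorder(root->right);
}

/* Search by comparing the key with the current node and descending. */
struct Node *search(struct Node *root, int key) {
    if (root == NULL || root->data == key) return root;
    if (key < root->data) return search(root->left, key);
    return search(root->right, key);
}

int main(void) {
    struct Node *root = NULL;
    int keys[] = {50, 30, 70, 20, 40, 60, 80};
    for (int i = 0; i < 7; i++)
        root = insert(root, keys[i]);
    inorder(root);             /* 20 30 40 50 60 70 80 */
    printf("\n%s\n", search(root, 60) ? "60 found" : "60 not found");
    return 0;
}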

OUTPUT

AIM: Adjacency Matrix Representation of Graphs (Directed and Undirected)

ADJACENCY MATRIX is a 2D array used to represent graphs, where each cell
matrix[i][j] indicates the presence (or absence) of an edge between vertex i and
vertex j. It is commonly used to store graph information in both directed and
undirected graphs.

1. DIRECTED GRAPH
In a directed graph, edges have a direction, and matrix[i][j] = 1 indicates an edge
from vertex i to vertex j.

EXAMPLE DIRECTED GRAPH:

ADJACENCY MATRIX

IMPLEMENTATION
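
A minimal C sketch of an adjacency matrix for a directed graph with 4 vertices;
the vertex count V and the example edges are illustrative assumptions, not the
graph from the original example.

#include <stdio.h>

#define V 4   /* number of vertices in this example */

/* For a directed graph, only matrix[src][dest] is set to 1. */
void addEdge(int matrix[V][V], int src, int dest) {
    matrix[src][dest] = 1;
}

void printMatrix(int matrix[V][V]) {
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++)
            printf("%d ", matrix[i][j]);
        printf("\n");
    }
}

int main(void) {
    int matrix[V][V] = {0};
    addEdge(matrix, 0, 1);   /* edge 0 -> 1 */
    addEdge(matrix, 0, 2);   /* edge 0 -> 2 */
    addEdge(matrix, 1, 2);   /* edge 1 -> 2 */
    addEdge(matrix, 2, 3);   /* edge 2 -> 3 */
    printf("Adjacency matrix (directed):\n");
    printMatrix(matrix);
    return 0;
}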

OUTPUT

2. UNDIRECTED GRAPH
In an undirected graph, edges have no direction; matrix[i][j] = 1 and
matrix[j][i] = 1 together represent a bidirectional edge between i and j.

EXAMPLE UNDIRECTED GRAPH:

ADJACENCY MATRIX:

IMPLEMENTATION
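
A minimal C sketch of an adjacency matrix for an undirected graph with 4 vertices;
note that each edge sets both matrix[u][v] and matrix[v][u]. The vertex count V
and the example edges are illustrative assumptions.

#include <stdio.h>

#define V 4   /* number of vertices in this example */

/* For an undirected graph, set both matrix[u][v] and matrix[v][u]. */
void addEdge(int matrix[V][V], int u, int v) {
    matrix[u][v] = 1;
    matrix[v][u] = 1;
}

void printMatrix(int matrix[V][V]) {
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++)
            printf("%d ", matrix[i][j]);
        printf("\n");
    }
}

int main(void) {
    int matrix[V][V] = {0};
    addEdge(matrix, 0, 1);   /* edge 0 -- 1 */
    addEdge(matrix, 0, 2);   /* edge 0 -- 2 */
    addEdge(matrix, 1, 3);   /* edge 1 -- 3 */
    addEdge(matrix, 2, 3);   /* edge 2 -- 3 */
    printf("Adjacency matrix (undirected):\n");
    printMatrix(matrix);
    return 0;
}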

OUTPUT
