DS UNIT 1 NOTES
Linear data structures are commonly used for organising and manipulating data in a sequential fashion. Some of the most
common linear data structures include:
Arrays: A collection of elements stored in contiguous memory locations.
Linked Lists: A collection of nodes, each containing an element and a reference to the next node.
Stacks: A collection of elements with Last-In-First-Out (LIFO) order.
Queues: A collection of elements with First-In-First-Out (FIFO) order.
1. Array
An array is a collection of items of the same data type stored at contiguous memory locations.
Homogeneous Elements: All elements within an array must be of the same data type.
Contiguous Memory Allocation: In most programming languages, elements in an array are stored in contiguous (adjacent)
memory locations.
Zero-Based Indexing: In many programming languages, arrays use zero-based indexing, which means that the first element is
accessed with an index of 0, the second with an index of 1, and so on.
Random Access: Arrays provide constant-time (O(1)) access to elements. This means that regardless of the size of the array, it
takes the same amount of time to access any element based on its index.
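A small C example illustrating these properties (the array name and values are only for illustration):

#include <stdio.h>

int main() {
    int marks[5] = {90, 75, 82, 60, 95};    /* homogeneous ints in contiguous memory */

    printf("%d\n", marks[0]);               /* zero-based indexing: the first element */
    printf("%d\n", marks[3]);               /* random access: O(1) for any valid index */

    marks[3] = 68;                          /* update an element in place */
    printf("%d\n", marks[3]);
    return 0;
}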
2. Linked List
A Linked List is a linear data structure which looks like a chain of nodes, where each node contains a data field and a
reference(link) to the next node in the list. Unlike Arrays, Linked List elements are not stored at a contiguous location.
Common Features of Linked List:
Node: Each element in a linked list is represented by a node, which contains two components:
o Data: The actual data or value associated with the element.
o Next Pointer(or Link): A reference or pointer to the next node in the linked list.
Head: The first node in a linked list is called the “head.” It serves as the starting point for traversing the list.
Tail: The last node in a linked list is called the “tail.”
Doubly Linked Lists: In a doubly linked list, each node has two pointers: one pointing to the next node and one
pointing to the previous node. This bidirectional structure allows for efficient traversal in both directions.
Circular Linked Lists: A circular linked list is a type of linked list in which the first and the last nodes are also
connected to each other to form a circle, there is no NULL at the end.
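A minimal sketch of a singly linked list in C, showing a node, the head, and a traversal (the createNode helper and the values are illustrative, not from the notes):

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;               /* the value stored in this node            */
    struct Node *next;      /* link to the next node (NULL at the tail) */
};

struct Node *createNode(int value) {
    struct Node *n = malloc(sizeof(struct Node));
    n->data = value;
    n->next = NULL;
    return n;
}

int main() {
    struct Node *head = createNode(10);                   /* head: first node of the list */
    head->next = createNode(20);
    head->next->next = createNode(30);                    /* last node is the tail        */

    for (struct Node *p = head; p != NULL; p = p->next)   /* traverse starting from head  */
        printf("%d ", p->data);
    return 0;
}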
3. Stack:
Stack is a linear data structure that follows the LIFO (Last In First Out) principle: the last element inserted is the first one to be popped out. Both insertion and deletion operations happen at one end only (the top of the stack).
Types of Stack:
Fixed Size Stack: As the name suggests, a fixed size stack has a fixed size and cannot grow or shrink dynamically. If
the stack is full and an attempt is made to add an element to it, an overflow error occurs. If the stack is empty and an
attempt is made to remove an element from it, an underflow error occurs.
Dynamic Size Stack: A dynamic size stack can grow or shrink dynamically. When the stack is full, it automatically
increases its size to accommodate the new element, and when the stack is empty, it decreases its size. This type of stack
is implemented using a linked list, as it allows for easy resizing of the stack.
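A sketch of a fixed size stack implemented with an array in C, showing the overflow and underflow checks described above (MAX and the function names are assumptions):

#include <stdio.h>
#define MAX 5

int stack[MAX], top = -1;               /* top == -1 means the stack is empty */

void push(int value) {
    if (top == MAX - 1) {               /* fixed size stack is full */
        printf("Stack Overflow\n");
        return;
    }
    stack[++top] = value;               /* insertion happens at the top end only */
}

int pop(void) {
    if (top == -1) {                    /* stack is empty */
        printf("Stack Underflow\n");
        return -1;
    }
    return stack[top--];                /* the last element inserted is removed first (LIFO) */
}

int main() {
    push(10); push(20); push(30);
    printf("%d\n", pop());              /* prints 30 */
    printf("%d\n", pop());              /* prints 20 */
    return 0;
}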
4. Queue:
Queue is a linear data structure that follows the FIFO (First In First Out) principle: the element inserted first is the first one to be removed. Insertion happens at the rear end and deletion happens at the front end.
Queue Operations:
Enqueue(): Adds (or stores) an element at the rear end of the queue.
Dequeue(): Removes an element from the front end of the queue.
Peek() or front(): Acquires the data element available at the front node of the queue without deleting it.
rear(): This operation returns the element at the rear end without removing it.
isFull(): Validates if the queue is full.
isNull(): Checks if the queue is empty.
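A sketch of these operations on a simple circular array based queue (MAX, front, rear and count are illustrative choices, not from the notes):

#include <stdio.h>
#define MAX 5

int queue[MAX], front = 0, rear = -1, count = 0;

int isFull(void) { return count == MAX; }     /* validates if the queue is full  */
int isNull(void) { return count == 0; }       /* checks if the queue is empty    */

void enqueue(int value) {                     /* adds an element at the rear end */
    if (isFull()) { printf("Queue Overflow\n"); return; }
    rear = (rear + 1) % MAX;
    queue[rear] = value;
    count++;
}

int dequeue(void) {                           /* removes an element from the front end (FIFO) */
    if (isNull()) { printf("Queue Underflow\n"); return -1; }
    int value = queue[front];
    front = (front + 1) % MAX;
    count--;
    return value;
}

int peek(void) { return queue[front]; }       /* front element without deleting it */

int main() {
    enqueue(1); enqueue(2); enqueue(3);
    printf("%d\n", peek());                   /* 1: the element at the front            */
    printf("%d\n", dequeue());                /* 1: removed in first-in-first-out order */
    printf("%d\n", dequeue());                /* 2 */
    return 0;
}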
What is Search?
Search is the process of finding a value in a list of values. In other words, searching is the process of locating a given value's position
in a list of values.
Linear Search Algorithm (Sequential Search)
Linear search compares the search element with each element of the list, one by one, starting from the first element.
Step 1: Read the search element from the user.
Step 2: Compare the search element with the first element in the list.
Step 3: If both are matching, then display "Given element is found!!!" and terminate the function.
Step 4: If both are not matching, then compare the search element with the next element in the list.
Step 5: Repeat steps 3 and 4 until the search element is compared with the last element in the list.
Step 6: If the last element in the list also does not match, then display "Element not found!!!" and terminate the function.
Example
Consider the following list of elements and a search element...
Program:
#include<stdio.h>
#include<conio.h>
void main(){
    int list[20], size, i, sElement;
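A minimal complete sketch of the sequential search described by the steps above, continuing from the declarations shown (the prompt strings and messages are assumptions, and the size is assumed to be at most 20):

#include <stdio.h>

int main() {
    int list[20], size, i, sElement;

    printf("Enter the size of the list: ");
    scanf("%d", &size);                     /* assumed to be at most 20 */
    printf("Enter %d values: ", size);
    for (i = 0; i < size; i++)
        scanf("%d", &list[i]);
    printf("Enter the element to search: ");
    scanf("%d", &sElement);

    for (i = 0; i < size; i++)              /* compare with every element in turn */
        if (list[i] == sElement) {
            printf("Element found at index %d\n", i);
            break;
        }
    if (i == size)
        printf("Element not found!!!\n");   /* reached the end without a match */
    return 0;
}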
------------------------------------------------------------------------------------------------------------------------------------------
Binary Search Algorithm
Binary search works on a sorted list of values. The search element is compared with the middle element of the list; if they match the search is finished, otherwise the search continues in either the left half or the right half of the list until the element is found or the list is exhausted.
Program:
#include<stdio.h>
void main()
{
    int first, last, middle, size, i, sElement, list[100];
    printf("Enter the size, the sorted elements and the search element: ");
    scanf("%d", &size);
    for (i = 0; i < size; i++)
        scanf("%d", &list[i]);
    scanf("%d", &sElement);
    first = 0;
    last = size - 1;
    while (first <= last) {
        middle = (first + last) / 2;                       /* middle of the current range */
        if (list[middle] == sElement) break;
        if (list[middle] < sElement) first = middle + 1;   /* search the right half */
        else last = middle - 1;                            /* search the left half  */
    }
    if (first <= last) printf("Element found at index %d\n", middle);
    else printf("Element not found!!!\n");
}
Insertion Sort
Insertion Sort builds the sorted part of the list one element at a time: each element is taken in turn and inserted into its correct position among the already sorted elements to its left.
Sorting Logic
Following is the sample code for insertion sort...
//Insertion sort logic
for i = 1 to size-1 {
    temp = list[i];
    j = i;
    while ((j > 0) && (temp < list[j-1])) {
        list[j] = list[j-1];
        j = j - 1;
    }
    list[j] = temp;
}
Program:
int main() {
    int arr[] = { 12, 11, 13, 5, 6 };
    int N = sizeof(arr) / sizeof(arr[0]);
    insertionSort(arr, N);    /* sort the array using the logic above */
    return 0;
}
Output:
Unsorted array: 12 11 13 5 6
Sorted array: 5 6 11 12 13
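A self-contained version that would produce the sample output above, with insertionSort following the logic sketched earlier (printArray is an assumed helper, not part of the notes):

#include <stdio.h>

void insertionSort(int list[], int size) {
    int i, j, temp;
    for (i = 1; i < size; i++) {
        temp = list[i];                        /* element to be inserted               */
        j = i;
        while (j > 0 && temp < list[j - 1]) {  /* shift larger elements one step right */
            list[j] = list[j - 1];
            j = j - 1;
        }
        list[j] = temp;                        /* place it in its correct position     */
    }
}

void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

int main() {
    int arr[] = { 12, 11, 13, 5, 6 };
    int N = sizeof(arr) / sizeof(arr[0]);

    printf("Unsorted array: ");
    printArray(arr, N);
    insertionSort(arr, N);
    printf("Sorted array: ");
    printArray(arr, N);
    return 0;
}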
Selection Sort
Selection Sort algorithm is used to arrange a list of elements in a particular order (Ascending or
Descending).
In selection sort, the first element in the list is selected and compared repeatedly with all the remaining elements in the list.
If any element is smaller than the selected element (for ascending order), then the two are swapped.
Then the element at the second position in the list is selected and compared with all the remaining elements in the list. If any element is smaller than the selected element, then the two are swapped. This procedure is repeated till the entire list is sorted.
Step by Step Process
Step 1: Select the first element of the list (i.e., the element at the first position in the list).
Step 2: Compare the selected element with all the other elements in the list.
Step 3: For every comparison, if any element is smaller than the selected element (for ascending order), then the two are swapped.
Step 4: Repeat the same procedure with the next position in the list till the entire list is sorted.
Sorting Logic
Following is the sample code for selection sort...
int main() {
int arr[] = {64, 25, 12, 22, 11};
int n = sizeof(arr) / sizeof(arr[0]);
selectionSort(arr, n);
return 0;
}
Output
Original array: 64 25 12 22 11
Sorted array: 11 12 22 25 64
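A self-contained sketch matching the main function and the description above, where the selected element is swapped whenever a smaller element is found (printArray is an assumed helper, not part of the notes):

#include <stdio.h>

void selectionSort(int arr[], int n) {
    int i, j, temp;
    for (i = 0; i < n - 1; i++)
        for (j = i + 1; j < n; j++)
            if (arr[j] < arr[i]) {        /* a smaller element is found: swap (ascending order) */
                temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
            }
}

void printArray(int arr[], int n) {
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

int main() {
    int arr[] = {64, 25, 12, 22, 11};
    int n = sizeof(arr) / sizeof(arr[0]);

    printf("Original array: ");
    printArray(arr, n);
    selectionSort(arr, n);
    printf("Sorted array: ");
    printArray(arr, n);
    return 0;
}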
Bubble Sort:
Bubble Sort is a simple comparison-based sorting algorithm that works by repeatedly comparing adjacent elements and swapping them if they are not in the correct order.
Program:
int main() {
    int arr[] = { 6, 0, 3, 5 };
    int n = sizeof(arr) / sizeof(arr[0]);
    bubbleSort(arr, n);    /* sort by repeatedly swapping adjacent elements */
    return 0;
}
Output
0 3 5 6
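A self-contained sketch consistent with the program and output above (the bubbleSort name is taken from the call above; the early-exit swapped flag is what gives the O(n) best case listed below):

#include <stdio.h>

void bubbleSort(int arr[], int n) {
    int i, j, temp, swapped;
    for (i = 0; i < n - 1; i++) {
        swapped = 0;
        for (j = 0; j < n - i - 1; j++)
            if (arr[j] > arr[j + 1]) {          /* adjacent pair out of order: swap */
                temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = 1;
            }
        if (!swapped) break;                    /* no swaps: list already sorted (best case O(n)) */
    }
}

int main() {
    int arr[] = { 6, 0, 3, 5 };
    int n = sizeof(arr) / sizeof(arr[0]);

    bubbleSort(arr, n);
    for (int i = 0; i < n; i++)                 /* prints: 0 3 5 6 */
        printf("%d ", arr[i]);
    printf("\n");
    return 0;
}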
Time Complexity:
Best: O(n)
Average: O(n^2)
Worst: O(n^2)
Space Complexity: O(1) (only a constant amount of extra space is needed for swapping)
What is Space complexity?
When we design an algorithm to solve a problem, it needs some computer memory to complete its execution. For any algorithm,
memory is required for the following purposes...
Memory required to store program instructions
Memory required to store constant values
Memory required to store variable values
And for a few other things
Space complexity of an algorithm can be defined as follows...
Total amount of computer memory required by an algorithm to complete its execution is called as space complexity of that
algorithm
Generally, when a program is under execution it uses the computer memory for THREE reasons. They are as follows...
Instruction Space: It is the amount of memory used to store compiled version of instructions.
Environmental Stack: It is the amount of memory used to store information of partially executed functions at the time of
function call.
Data Space: It is the amount of memory used to store all the variables and constants.
To calculate the space complexity, we must know the memory required to store different datatype values (according to
the compiler). For example, the C Programming Language compiler requires the following...
2 bytes to store Integer value,
4 bytes to store Floating Point value, 1 byte to store Character value, and so on.
Example 1
Consider the following piece of code...
int square(int a)
{
    return a*a;
}
In the above piece of code, it requires 2 bytes of memory to store the variable 'a' and another 2 bytes of memory for the return value. That means it requires a total of 4 bytes of memory to complete its execution, and these 4 bytes are fixed for any input value of 'a'. This space complexity is said to be Constant Space Complexity.
Constant Space Complexity: If an algorithm requires a fixed amount of space for all input values, then that space complexity is said to be Constant Space Complexity.
Example 2
int sum(int A[], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}
In the above piece of code it requires
'n*2' bytes of memory to store the array variable 'A[]',
2 bytes of memory for the integer parameter 'n',
4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each), and
2 bytes of memory for the return value.
That means, totally it requires '2n+8' bytes of memory to complete its execution. Here, the amount of memory depends on the input value of 'n'. This space complexity is said to be Linear Space Complexity.
Linear Space Complexity: If the amount of space required by an algorithm increases with the increase of the input value, then that space complexity is said to be Linear Space Complexity.
Time Complexity
What is Time complexity?
Every algorithm requires some amount of computer time to execute its instructions and perform the task. This computer time required is called time complexity.
Time complexity of an algorithm can be defined as follows...
The time complexity of an algorithm is the total amount of time required by an algorithm to complete its execution.
Generally, running time of an algorithm depends upon the following...
Whether it is running on Single processor machine or Multi processor machine.
Whether it is a 32 bit machine or 64 bit machine
Read and Write speed of the machine.
The time it takes to perform Arithmetic operations, logical operations, return value and assignment operations etc.,
Input data
Calculating the time complexity of an algorithm based on the system configuration is a very difficult task because the configuration changes from one system to another. To solve this problem, we assume a model machine with a specific configuration, so that we can calculate a generalized time complexity according to that model machine.
To calculate the time complexity of an algorithm, we need to define a model machine. Let us assume a machine with the following configuration...
1. Single processor machine
2. 32-bit Operating System machine
3. It performs sequential execution
4. It requires 1 unit of time for Arithmetic and Logical operations
5. It requires 1 unit of time for Assignment and Return value
6. It requires 1 unit of time for Read and Write operations
Now, we calculate the time complexity of following example code by using the above defined model machine...
Example 1
Consider the following piece of code...
int sum(int a, int b)
{
return a+b;
}
In the above sample code, it requires 1 unit of time to calculate a+b and 1 unit of time to return the value. That means it takes a total of 2 units of time to complete its execution, and this does not change based on the input values of a and b. That means for all input values, it requires the same amount of time, i.e. 2 units.
Constant Time Complexity: If any program requires a fixed amount of time for all input values, then its time complexity is said to be Constant Time Complexity.
Example 2
Consider the following piece of code...
int sum(int A[], int n)
{
int sum = 0, i;
for(i = 0; i < n; i++)
    sum = sum + A[i];
return sum;
}
Using the model machine defined above, the cost works out as follows: the assignments sum = 0 and i = 0 take 1 unit each, the condition i < n is evaluated n+1 times, the increment i++ runs n times, the statement sum = sum + A[i] takes 2 units (one addition and one assignment) and runs n times, and the return takes 1 unit. So the above code requires 1 + 1 + (n+1) + n + 2n + 1 = '4n+4' units of computer time to complete the task. Here the exact time is not fixed; it changes based on the value of n. If we increase the value of n, then the time required also increases linearly.
Totally it takes '4n+4' units of time to complete its execution, and this is Linear Time Complexity.
Linear Time Complexity: If the amount of time required by an algorithm increases with the increase of the input value, then that time complexity is said to be Linear Time Complexity.
Best Case, Worst Case and Average Case Efficiencies:
Best Case: The minimum number of steps that can be executed for a given problem is known as the best case.
Worst Case: The maximum number of steps that can be executed for a given problem is known as the worst case.
Average Case: The average number of steps that can be executed for a given problem is known as the average case.
For example, in linear search the best case occurs when the search element is the first element of the list (one comparison), and the worst case occurs when it is the last element or is not present at all (n comparisons).