OS Lab Experiments
The aim of this experiment is to familiarize students with system calls in the Unix/Linux
environment by implementing and verifying several fundamental system calls using C
programming language. Specifically, the experiment aims to implement getpid(), getppid(),
opendir(), readdir(), closedir(), and fork() system calls, and validate their functionality
through appropriate program execution and output analysis.
2. Requirements of the Experiment
A Unix/Linux system environment (like Ubuntu or CentOS).
C programming environment (GCC compiler).
getppid()
step 1: Include the necessary header files.
step 2: Define a function get_parent_process_id that calls the getppid() system call.
step 3: Use the get_parent_process_id function in the main function to print the parent
process ID
opendir()
step 1: Include the necessary header files.
step 2: Define a function open_directory that calls the opendir() function.
step 3: Use the open_directory function in the main function to open a directory and
handle errors.
step 4: If the directory is successfully opened, close it using closedir()
readdir()
step 1: Include the necessary header files.
step 2: Define a function read_directory that calls the readdir() function to read
directory entries.
step 3: Use the opendir function to open a directory, and then use the read_directory
function to read and print the entries.
step 4: Close the directory stream using closedir().
closedir()
step 1: Include the necessary header files.
step 2: Define a function close_directory that calls the closedir() function.
step 3: Use the opendir() function to open a directory, then use the closedir() function
to close it.
step 4: Handle errors appropriately if the directory cannot be opened or closed.
fork()
step 1: Include the necessary header files.
step 2: Define a function create_process that calls the fork() function.
step 3: In the create_process function, handle the return value of fork() to differentiate
between the parent and child processes.
step 4: In the main function, call create_process and handle the behavior for both
parent and child processes.
step 5: Properly handle errors if fork() fails.
5. Sample Output/Result of the Experiment
1. getpid()
The process ID is: 12345
2. getppid()
The parent process ID is: 6789
3. opendir()
This will execute the program and print the names of all files and directories in
the current directory. Adjust directory_name to open other directories as needed.
4. readdir()
This will execute the program and print the names of all files and directories
in the current directory. Adjust directory_name to open other directories as
needed.
5. closedir()
This will execute the program, which opens and reads the names of files
and directories in the current directory, and then closes the directory.
6. fork()
Hello from Parent Process! Child PID: 12345
Hello from Child Process! PID: 12345
6. Inferences Obtained from the Experiment
Understanding of how system calls interface with the operating system kernel.
Recognition of the differences between various system calls in terms of their
functionality and usage.
Insight into the process creation mechanism and parent-child relationships in
Unix/Linux.
7. Viva Questions for the Experiment
Sample Viva Questions:
1. What is the purpose of system calls in an operating system?
o Answer: System calls provide an interface between user-level programs and
the operating system kernel, allowing programs to request services like I/O
operations, process management, and file system access.
2. Explain the difference between getpid() and getppid() system calls.
o Answer: getpid() returns the process ID of the current process, whereas
getppid() returns the process ID of the parent process.
3. How does fork() system call work?
o Answer: fork() creates a new process (child process) that is a copy of the
calling process. The child process runs concurrently with the parent process
and typically continues executing from the point where fork() was called.
System calls like open(), read(), write(), and close() are fundamental interfaces provided by
the operating system kernel to manage files.
open() is used to open or create a file and returns a file descriptor.
read() reads data from an open file descriptor into a buffer.
write() writes data from a buffer to an open file descriptor.
close() closes a file descriptor, releasing associated resources.
These system calls are crucial for performing file operations in Unix/Linux environments and
provide efficient mechanisms for handling file input and output operations in C programs.
4. Procedure/Step-by-Step Instructions
b. open()
step 1: Include the necessary headers for open() and related constants.
step 2: Define variables to store file descriptors and other parameters.
step 3: Use the open() function to open or create the file.
step 4: Perform operations on the file using the file descriptor returned by
open().
step 5: After finishing operations, close the file using close().
c. read()
step 1: Include the necessary headers for read() and related functions.
step 2: Define variables to store the file descriptor, buffer for data, and other
parameters.
step 3: Use the open() function to open the file for reading.
step 4: Use the read() function to read data from the file into the buffer.
step 5: After finishing reading, close the file using close().
d. write()
step 1: Include the necessary headers for write() and related functions.
step 2: Define variables to store the file descriptor, buffer containing data, and other parameters.
step 3: Use the open() function to open or create the file for writing.
step 4: Use the write() function to write data from the buffer to the file.
step 5: After finishing writing, close the file using close().
e. close()
step 1: Include the necessary headers for close() and related functions.
step 2: Define variables to store the file descriptor and any other necessary
parameters.
step 3: Use the open() function to open the file.
step 4: Perform any necessary operations using the file descriptor (fd), such as
reading or writing data.
step 5: After finishing operations, close the file using close().
5. Sample Output/Result of the Experiment
It will create (or truncate) the file example.txt,
write "Hello, World!\n" to it, read it back, and then
print "Hello, World!\n" as the output.
6. Inferences Obtained from the Experiment
Understanding of file handling system calls and their respective functionalities.
Insight into the role of file descriptors in file management.
Practical experience in implementing file operations using system calls in C.
7. Viva Questions for the Experiment
Sample Viva Questions:
1. Explain the purpose of file descriptors in Unix/Linux operating systems.
o Answer: File descriptors are integer identifiers used by the operating system to
uniquely identify open files or other I/O resources. They facilitate
communication between processes and the kernel for file operations.
2. What are the differences between open() and fopen() functions in C?
o Answer: open() is a system call that directly interacts with the operating
system to open or create files, returning a file descriptor. fopen() is a standard
library function that provides buffered I/O and returns a FILE pointer.
3. How does the read() system call handle file input operations?
o Answer: read() reads data from an open file descriptor into a buffer specified
by the caller. It returns the number of bytes read or -1 on error and advances
the file offset.
2. Requirements of the Experiment
Unix/Linux operating system environment (like Ubuntu or CentOS).
C programming environment with GCC compiler.
Text editor for writing C code.
Terminal for compiling and running programs.
Basic understanding of process scheduling concepts such as arrival time, burst time,
waiting time, turnaround time, and context switching.
3. Theoretical Background of the Experiment
CPU scheduling is a key component of operating systems responsible for deciding which
process to execute next on the CPU.
FCFS schedules processes in the order they arrive, without considering burst times.
RR allocates a fixed time slice (quantum) to each process in a cyclic manner.
SJF selects the process with the shortest burst time first, minimizing average waiting
time.
These algorithms differ in their approach to prioritizing processes, impacting overall system
performance and efficiency. Implementing and comparing these algorithms provides insights
into their strengths and weaknesses under different workload scenarios.
4. Procedure/Step-by-Step Instructions
Implementing Algorithms:
o Implement functions for FCFS, RR, and SJF scheduling algorithms.
o Define structures or arrays to represent processes with attributes like arrival
time, burst time, waiting time, and turnaround time.
o FCFS:
1- Input the processes along with their burst time (bt).
2- Find waiting time (wt) for all processes.
3- The first process to arrive need not wait, so the waiting time for
process 1 will be 0, i.e. wt[0] = 0.
4- Find the waiting time for all other processes, i.e. for process i ->
wt[i] = bt[i-1] + wt[i-1] - (at[i] - at[i-1]);
5- Find turnaround time =waiting_time + burst_time for all processes.
6- Find average waiting time = total_waiting_time / no_of_processes.
7- Similarly, find average turnaround time = total_turn_around_time /
no_of_processes.
o SJF Algorithm:
1. Sort all the processes according to the arrival time.
2. Then select that process which has minimum arrival time and minimum
Burst time.
3. After completion of a process, make a pool of the processes that have
arrived by the completion of the previous process, and select from this pool
the process having the minimum burst time.
a. Completion Time: Time at which the process completes its execution.
b. Turn Around Time: Time difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
c. Waiting Time (W.T): Time difference between turnaround time and burst time.
Waiting Time = Turnaround Time – Burst Time
o ROUND ROBIN Algorithm:
* The CPU scheduler picks the process from the circular/ready queue, sets a
timer to interrupt it after 1 time slice/quantum, and dispatches it.
* If the process has a burst time less than 1 time slice/quantum
> The process will leave the CPU after completion.
> The CPU will proceed with the next process in the ready queue / circular queue.
* Else, if the process has a burst time longer than 1 time slice/quantum
> The timer will be stopped, causing an interrupt to the OS.
> The executing process is then placed at the tail of the circular/ready queue
by applying a context switch.
> The CPU scheduler then proceeds by selecting the next process in the ready
queue.
1. Completion Time: Time at which the process completes its execution.
2. Turn Around Time: Time difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
3. Waiting Time (W.T): Time difference between turnaround time and burst time.
Waiting Time = Turnaround Time – Burst Time
8. Inferences Obtained from the Experiment
Comparison of average waiting time and turnaround time for FCFS, RR, and SJF
algorithms.
Understanding of how scheduling policies impact CPU utilization and efficiency.
Insight into the suitability of each algorithm based on process characteristics like burst
times.
9. Viva Questions for the Experiment
1. Explain the FCFS scheduling algorithm and its limitations.
o Answer: FCFS schedules processes in the order of their arrival. It is simple to
implement but may lead to poor average waiting times, especially if long
processes arrive first (convoy effect).
2. How does the Round Robin (RR) scheduling algorithm work? Discuss its
parameters and impact.
o Answer: RR allocates a fixed time slice (quantum) to each process in a cyclic
manner. It ensures fairness among processes but may result in higher context
switching overhead and longer response times for short processes.
3. What is the difference between preemptive and non-preemptive SJF scheduling?
o Answer: Non-preemptive SJF selects the next process based on its burst time
without preemption, while preemptive SJF may preempt a running process if a
shorter job arrives, optimizing for shorter average waiting times.
Exp.No. 4 Implement a C-program for Priority scheduling Date:
algorithm.
step 3: Further processes will be scheduled according to the arrival time and
priority of the process. (Here we assume that a lower priority number
indicates a higher priority.) If two processes have the same priority, sort
them according to process number.
step 4: Note: In the question, it will be clearly mentioned which number has
the higher priority and which number has the lower priority.
step 5: Once all the processes have arrived, we can schedule them based on their
priority.
5. Sample Output/Result of the Experiment
Upon execution, the program should demonstrate:
The order of process execution based on their assigned priority values.
Metrics such as average turnaround time and average waiting time for the simulated
processes.
Input :
process no-> 1 2 3 4 5
arrival time-> 0 1 3 2 4
burst time-> 3 6 1 2 4
priority-> 3 4 9 7 8
Output :
Process Arrival Burst Completion Turnaround Waiting
1 0 3 3 3 0
2 1 6 9 8 2
3 3 1 16 13 12
4 2 2 11 9 7
5 4 4 15 11 7
Understanding of how priority scheduling affects system responsiveness and
throughput.
Insight into the trade-offs between preemptive and non-preemptive priority
scheduling approaches.
1. Aim of the Experiment
The aim of this experiment is to implement a C program that simulates the classic producer-
consumer problem using semaphores. The objective is to demonstrate how semaphores can
be utilized to synchronize access to a shared buffer between producer and consumer
processes. This experiment aims to illustrate the importance of synchronization mechanisms
in concurrent programming and how semaphores can prevent issues such as race conditions
and buffer overflows.
2. Requirements of the Experiment
Unix/Linux operating system environment (such as Ubuntu or CentOS).
C programming environment with GCC compiler.
Text editor for writing C code.
Terminal for compiling and running programs.
Basic understanding of semaphores and concurrent programming concepts.
3. Theoretical Background of the Experiment
The producer-consumer problem involves two processes, a producer and a consumer, sharing
a common fixed-size buffer. The producer generates data items and places them into the
buffer, while the consumer removes items from the buffer and processes them. To avoid race
conditions where the consumer tries to consume from an empty buffer or the producer tries to
produce into a full buffer, synchronization mechanisms like semaphores are used.
Semaphores allow mutual exclusion (mutex) and synchronization among processes by
controlling access to shared resources. In this experiment, semaphores will be employed to
ensure that producers and consumers access the buffer in a mutually exclusive and
coordinated manner, thereby preventing conflicts and maintaining data integrity.
4. Procedure/Step-by-Step Instructions
1. Algorithm/Description
Step 1: Start the program.
Step 2: Declare and initialize the necessary variables.
Step 3: Create a Producer.
Step 4: Producer (Child Process) performs a down operation and writes a message.
Step 5: Producer performs an up operation for the consumer to consume.
Step 6: Consumer (Parent Process) performs a down operation and reads or consumes
the data (message).
Step 7: Consumer then performs an up operation.
Step 8: Stop the program.
5. Sample Output/Result of the Experiment
Upon execution, the program should demonstrate:
Sequential production and consumption of items in the buffer by producers and
consumers.
Correct synchronization ensuring that producers do not produce into a full buffer and
consumers do not consume from an empty buffer.
Producer produced-0
Producer produced-1
Consumer consumed-0
Consumer consumed-1
Producer produced-2
6. Inferences Obtained from the Experiment
Understanding of how semaphores facilitate synchronization and mutual exclusion in
concurrent programs.
Insight into handling the producer-consumer problem to prevent race conditions and
ensure data integrity.
Practical experience in implementing synchronization mechanisms using semaphores
in C.
7. Viva Questions for the Experiment
1. What is the producer-consumer problem, and why is synchronization necessary
to solve it?
o Answer: The producer-consumer problem involves two processes sharing a
common buffer. Synchronization is necessary to prevent issues such as race
conditions where the consumer might access an empty buffer, or the producer
might overwrite existing data in a full buffer.
2. Explain the role of semaphores in the producer-consumer problem.
o Answer: Semaphores are used to enforce mutual exclusion and
synchronization between the producer and consumer processes accessing a
shared buffer. They ensure that only one process can access the buffer at a
time, preventing conflicts and maintaining data integrity.
3. What are the differences between binary semaphores and counting semaphores?
o Answer: Binary semaphores (mutex) have two states (0 and 1) and are used for
mutual exclusion, typically to protect shared resources. Counting semaphores can
have a value greater than 1 and are used for tasks like resource management or
limiting the number of concurrent accesses.
Exp.No. 6 Develop a C program to provide synchronization Date:
among the 5 philosophers in Dining Philosophers
problem using semaphore.
3. Theoretical Background of the Experiment
The dining philosophers problem is another classic synchronization problem, used to
demonstrate the concept of deadlock and to evaluate situations where multiple
resources must be allocated to multiple processes.
The Dining Philosophers Problem states that K philosophers are seated around a
circular table with one chopstick between each pair of philosophers. A philosopher
may eat if he can pick up the two chopsticks adjacent to him. One chopstick may be
picked up by either of its adjacent philosophers, but not both.
Step 10: No philosopher should starve to death (i.e., wait beyond a certain
amount of time before acquiring both chopsticks).
proceed, leading to a system halt. Semaphore-based synchronization ensures
that such issues are mitigated by controlling access to shared resources.
Exp.No. 7 Implement a C-program to handle deadlock using Date:
Bankers Algorithm
Max :
● It is a 2-d array of size ‘n*m’ that defines the maximum demand of each
process in a system.
● Max[ i, j ] = k means process Pi may request at most ‘k’ instances of
resource type Rj.
Allocation :
● It is a 2-d array of size ‘n*m’ that defines the number of resources of each
type currently allocated to each process.
● Allocation[i,j] = k means process Pi is currently allocated ‘k’ instances of
resource type Rj
Need :
● It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of
each process.
● Need [ i, j ] = k means process Pi currently needs ‘k’ instances of resource
type Rj for its execution.
● Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
Allocationi specifies the resources currently allocated to process Pi and Needi specifies
the additional resources that process Pi may still request to complete its task.
Banker’s algorithm consists of Safety algorithm and Resource request algorithm
Safety Algorithm
The algorithm for finding out whether or not a system is in a safe state can be described as
follows:
1) Let Work and Finish be vectors of length ‘m’ and ‘n’ respectively.
Initialize: Work = Available
Finish[i] = false; for i=1, 2, 3, 4….n
2) Find an i such that both
a) Finish[i] = false
b) Needi <= Work
if no such i exists goto step (4)
3) Work = Work + Allocation[i]
Finish[i] = true
goto step (2)
4) if Finish [i] = true for all i
then the system is in a safe state
Resource-Request Algorithm
Let Requesti be the request array for process Pi. Requesti [j] = k means process Pi
wants k instances of resource type Rj. When a request for resources is made by
process Pi, the following actions are taken:
1) If Requesti <= Needi
Goto step (2) ; otherwise, raise an error condition, since the process has exceeded its
maximum claim.
2) If Requesti <= Available
Goto step (3); otherwise, Pi must wait, since the resources are not available.
3) Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
5. Sample Output/Result of the Experiment
Upon execution, the program should demonstrate:
Successful allocation and deallocation of resources by processes using Banker's
Algorithm.
Output indicating the state transitions of processes, including resource requests,
grants, and releases.
Verification that the system remains in a safe state throughout the execution,
preventing deadlock situations.
Enter the number of resources: 4
Enter the number of processes: 5
Enter Claim Vector: 8 5 9 7
Enter Allocated Resource Table: 2 0 1 1 0 1 2 1 4 0 0 3 0 2 1 0 1 0 3 0
Enter Maximum Claim table: 3 2 1 4 0 2 5 2 5 1 0 5 1 5 3 0 3 0 3 3
The Claim Vector is: 8 5 9 7
The Allocated Resource Table:
2011
0121
4003
0210
1030
The Maximum Claim Table:
3214
0252
5105
1530
3033
Allocated resources: 7 3 7 5
Available resources: 1 2 2 2
Process3 is executing.
The process is in safe state.
Available vector: 5 2 2 5
Process1 is executing.
The process is in safe state.
Available vector: 7 2 3 6
Process2 is executing.
The process is in safe state.
Available vector: 7 3 5 7
Process4 is executing.
The process is in safe state.
Available vector: 7 5 6 7
Process5 is executing.
The process is in safe state.
Available vector: 8 5 9 7
Exp.No. 8 Implement a C-program to demonstrate the first fit and Date:
best fit algorithm
ALGORITHM: FIRST-FIT
Step 1: Include the necessary header files required.
Step 2: Declare the variables needed.
Step 3: Read the number of blocks and the size of each block.
Step 4: Read the number of processes and the size of each process.
Step 5: Check if the process size is less than or equal to block size.
Step 6: If yes, assign the corresponding block to the current process.
Step 7: Else print the current process is not allocated.
5. Sample Output/Result of the Experiment
Upon execution, the program should demonstrate:
Allocation and deallocation of memory blocks using First Fit and Best Fit algorithms.
Output showing how each algorithm handles memory requests and manages
fragmentation.
Metrics such as average memory utilization and fragmentation levels for comparison
between the two algorithms.
MEMORY MANAGEMENT SCHEME - BEST FIT
Enter No. of Blocks: 5
Enter the 0st block size: 500
Enter the 1st block size: 100
Enter the 2st block size: 250
Enter the 3st block size: 650
Enter the 4st block size: 850
Enter No. of Process: 5
Enter the size of 0st process: 450
Enter the size of 1st process: 605
Enter the size of 2st process: 820
Enter the size of 3st process: 110
Enter the size of 4st process: 230
Process Block Size
820 850
605 650
450 500
230 250
110 100
OUTPUT:
MEMORY MANAGEMENT SCHEME - FIRST FIT
Enter No. of Blocks: 5
Enter the 0st block size: 120
Enter the 1st block size: 230
Enter the 2st block size: 340
Enter the 3st block size: 450
Enter the 4st block size: 560
Enter No. of Process: 5
Enter the size of 0st process: 530
Enter the size of 1st process: 430
Enter the size of 2st process: 630
Enter the size of 3st process: 203
Enter the size of 4st process: 130
Process Block Size
530 120
430 230
630 340
203 450
130 560
The process 3 [size 203] allocated to block 230
The process 4 [size 130] allocated to block 340
The process 1 [size 430] allocated to block 450
The process 0 [size 530] allocated to block 560
The process 2 is not allocated.
Exp.No. 9 Implement a C-program for illustrating Page replacement Date:
algorithms
a) First in First Out (FIFO)
b) Least Recently Used (LRU)
c) Optimal
In virtual memory management, page replacement algorithms determine which page to evict
from memory when a new page needs to be loaded and memory is full. Three commonly
used algorithms are FIFO, LRU, and Optimal:
FIFO (First in First Out): This algorithm replaces the oldest page in memory,
regardless of how frequently or infrequently it is used.
LRU (Least Recently Used): This algorithm replaces the page that has not been used
for the longest period of time.
Optimal: This theoretical algorithm replaces the page that will not be used for the
longest period in the future. It serves as a benchmark for comparison against practical
algorithms.
These algorithms aim to minimize page faults (instances where a page required by a process
is not available in memory) and improve overall system performance by optimizing the use of
available memory resources.
4. Procedure/Step-by-Step Instructions
Step-by-Step Procedure:
1. Algorithm/Description
A) FIRST IN FIRST OUT (FIFO) PAGE REPLACEMENT ALGORITHM:
Step1: Start
Step2: Read no of frames and reference
Step3: Read the frame list
Step4: Copy the reference list into stack
Step5: Insert the frame number into frame by FIFO
Step6: Display the frame stack S
Step7: Stop
B) LEAST RECENTLY USED(LRU) PAGE REPLACEMENT ALGORITHM:
Step1: Start
Step2: Read no of frames and reference and reference list values
Step3: Insert the element into the frame by least recently used
Step4: While inserting an element i, if the frame contents already include the
same element i, then print that no page fault occurs
Step5: Otherwise, print that a page fault occurs, and continue from Step3 until
the reference list becomes empty
Step6: Stop
C) OPTIMAL PAGE REPLACEMENT ALGORITHM:
Step1: Start
Step2: Read the number of frames, reference and reference list
Step3: Replace the page that will not be used for the longest period of time
Step4: Count and print the no. of page faults occur
Step5: Display the reference list stack
Step6: Stop
5. Sample Output/Result of the Experiment
Upon execution, the program should demonstrate:
Metrics such as average memory utilization and execution time.
Output indicating which algorithm performs best under specific conditions of page
referencing patterns and memory sizes.
Enter no of pages:18
Enter reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7
Enter no of frames:3
7
70
701
201
201
203
203
243
243
243
203
203
203
201
201
201
201
207
total no of page faults=9
requires additional overhead to track page usage. Optimal provides the best
possible performance benchmark but is impractical to implement in real
systems due to its requirement of future page reference knowledge.
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a
file requires n blocks and is given a block b as the starting location, then the blocks assigned
to the file will be: b, b+1, b+2,……b+n-1. This means that given the starting block address
and the length of the file (in terms of blocks required), we can determine the blocks occupied
by the file.
The directory entry for a file with contiguous allocation contains
· Address of starting block
· Length of the allocated portion.
The file ‘mail’ in the following figure starts from the block 19 with length = 6 blocks.
Therefore, it occupies 19, 20, 21, 22, 23, 24 blocks.
4. Procedure/Step-by-Step Instructions
Step-by-Step Procedure:
1. Implementing Contiguous File Allocation:
STEP 1: Start the program.
STEP 2: Gather information about the number of files.
STEP 3: Gather the memory requirement of each file.
STEP 4: Allocate the memory to the file in a sequential manner.
STEP 5: Select any random location from the available location.
STEP 6: Check if the location that is selected is free or not.
STEP 7: If the location is allocated set the flag = 1.
STEP 8: Print the file number, length, and the block allocated.
STEP 9: Gather information if more files must be stored.
STEP 10: If yes, then go to STEP 2.
STEP 11: If no, stop the program.
2. Simulating File Operations:
o Create a simulation where files are created, read, updated, and deleted using
the contiguous allocation method.
o Track file allocations, disk space utilization, and fragmentation issues.
5. Sample Output/Result of the Experiment
Upon execution, the program should demonstrate:
Allocation of contiguous blocks for new files.
Output indicating file allocation status, including allocated blocks and free space.
Evaluation of fragmentation levels and their impact on storage efficiency and file
access performance.
INPUT:
Enter no of files :3
Enter file name 1 :A
Enter starting block of file 1 :85
Enter no of blocks in file 1 :6
Enter file name 2 :B
Enter starting block of file 2 :102
Enter no of blocks in file 2 :4
Enter file name 3 : C
Enter starting block of file 3 : 60
Enter no of blocks in file 3 : 4
Enter the file name to be searched : B
OUTPUT:
FILE NAME START BLOCK NO OF BLOCKS BLOCKS OCCUPIED
B 102 4 102, 103, 104, 105
6.Inferences Obtained from the Experiment
Evaluation of disk space utilization efficiency under the contiguous file allocation
method.
Comparison of file fragmentation levels and their impact on file access speed and
storage management.
Insight into the advantages (e.g., fast access) and disadvantages (e.g., fragmentation)
of using contiguous allocation for file storage.
Exp.No. 11 Implement a C-program for demonstrating Disk Date:
scheduling algorithms:
a. FCFS
b. SCAN
c. C-SCAN
1. Aim of the Experiment
The aim of this experiment is to implement and compare three disk scheduling algorithms —
First-Come, First-Served (FCFS), SCAN, and C-SCAN — in a C program. The objective is
to simulate how these algorithms manage the movement of disk arms and optimize disk
access time for read and write operations. This experiment aims to illustrate the differences in
efficiency, performance, and behavior of these algorithms under varying scenarios of disk
request queues.
2. Requirements of the Experiment
Unix/Linux operating system environment (such as Ubuntu or CentOS).
C programming environment with GCC compiler.
Text editor for writing C code.
Terminal for compiling and running programs.
Basic understanding of disk scheduling algorithms, disk I/O operations, and data
structures.
3. Theoretical Background of the Experiment
Disk scheduling algorithms determine the order in which disk I/O requests are serviced by the
disk arm. Three common algorithms are:
FCFS (First-Come, First-Served): This algorithm services requests in the order they
arrive. It is straightforward but may result in longer seek times if requests are far apart
on the disk.
SCAN: Also known as Elevator algorithm, SCAN services requests in one direction
until the end of the disk is reached, then reverses direction. It reduces seek times by
preventing the disk arm from having to travel across the entire disk repeatedly.
C-SCAN: C-SCAN is a variant of SCAN where the disk arm "scans" the disk in one
direction only and when it reaches the end of the disk, it jumps to the beginning of the
disk without servicing requests on the return trip. This reduces variance in service
time compared to SCAN.
4. Procedure/Step-by-Step Instructions
Step-by-Step Procedure:
1. Algorithm/Description:
(i) First Come First Serve (FCFS)
This algorithm entertains requests in the order they arrive in the disk queue.
Example: Consider a disk queue with requests for I/O to blocks on cylinders 98, 183,
41, 122, 14,124, 65, 67. The head is initially at cylinder number 53.
Algorithm:
1. Start the program.
2. Mark ‘head’ as the initial position of the disk head.
3. Let the request array store the indexes of the requested tracks in ascending
order of their time of arrival.
4. Take the tracks one by one in arrival order and calculate the absolute distance
of each track from the head.
5. Increment the total seek count by this distance.
6. The currently serviced track position becomes the new head position.
7. Repeat from step 4 until all tracks in the request array have been serviced.
(ii) SCAN Algorithm
Algorithm:
1. Start the program.
2. Mark ‘head’ as the initial position of the disk head.
3. Let the request array store the indexes of the requested tracks in ascending
order of their time of arrival.
4. Let direction represent whether the head is moving towards the left or the right.
5. Service all tracks one by one in the direction the head is moving.
6. Calculate the absolute distance of the track from the head.
7. Increment the total seek count by this distance.
8. The currently serviced track position becomes the new head position.
9. Repeat from step 5 until one end of the disk is reached.
10. On reaching the end of the disk, reverse the direction and go to step 5 until all
tracks in the request array have been serviced.
(iii) Circular-SCAN (C-SCAN) Algorithm
Circular-SCAN is an improved version of SCAN.
The head starts from one end of the disk and moves towards the other end, servicing all
requests in between. After reaching the other end, the head reverses its direction and returns
to the starting end without servicing any requests on the way back. The same process then repeats.
Example: Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41,
122, 14, 124, 65, 67. The head is initially at cylinder number 53.
Algorithm:
1. Start the program.
2. Mark ‘head’ as the initial position of the disk head.
3. Let the request array store the indexes of the requested tracks in ascending
order of their time of arrival.
4. The head services requests only while moving in the right direction, from 0 to
the end of the disk.
5. While moving in the left direction, do not service any tracks.
6. On reaching the beginning (left end) of the disk, reverse the direction.
7. While moving in the right direction, service all tracks one by one.
8. While moving in the right direction, calculate the absolute distance of each
track from the head.
9. Increment the total seek count by this distance.
10. The currently serviced track position becomes the new head position.
11. Repeat from step 8 until the right end of the disk is reached.
12. On reaching the right end of the disk, reverse the direction and go to step 5
until all tracks in the request array have been serviced.
5. Sample Output/Result of the Experiment
Upon execution, the program should demonstrate:
Total service time and average seek time for each disk scheduling algorithm.
Output showing the order in which disk requests are serviced and the movement of
the disk arm for each algorithm.
Comparison of performance metrics highlighting the effectiveness of SCAN and C-
SCAN in reducing seek times compared to FCFS.
Input Sample
Enter Number of Tracks: 10
Enter Track Position: 50 55 18 40 60 120 67 80 91 22
Exp.No. 12 Date:
Implement a Virtual File System (VFS) interface for your kernel, and a temporary
memory-based file system (tmpfs) that mounts as the root file system in C
programming language.
2. Define VFS Structures and Functions:
o Define structures such as inode, superblock, file_operations, and
inode_operations that represent file system entities and operations.
o Implement functions for file system operations like open, read, write, close,
mkdir, rmdir, etc., adhering to the VFS interface.
o Pseudo code (this sketch does not handle crossing of mount points or relative
pathnames):
int vfs_lookup(const char *pathname, struct vnode **target) {
    struct vnode *vnode_itr = rootfs->root;
    char buf[256];
    strncpy(buf, pathname, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    /* Walk the path one component at a time. */
    for (char *component_name = strtok(buf, "/");
         component_name != NULL;
         component_name = strtok(NULL, "/")) {
        struct vnode *next_vnode;
        int ret = vnode_itr->v_ops->lookup(vnode_itr, &next_vnode,
                                           component_name);
        if (ret != 0) {
            return ret;
        }
        vnode_itr = next_vnode;
    }
    *target = vnode_itr;
    return 0;
}
3. Implement tmpfs File System:
o Create a tmpfs-specific implementation that uses kernel memory for file and
directory storage.
o Implement functions to manage inode creation, file allocation, directory
operations, and memory management within tmpfs.
4. Integrate tmpfs as Root File System:
o Modify the kernel's boot sequence or configuration to mount tmpfs as the root
file system.
o Ensure initialization routines set up tmpfs structures and mount them
appropriately during kernel boot.
5. Test File System Operations:
o Write test applications or scripts to perform basic file operations (create files,
read/write data, create directories) on the tmpfs root file system.
o Verify that operations behave as expected and that data persistence (or lack
thereof) meets tmpfs characteristics.
6. Compile and Install the Custom Kernel:
o Compile the modified kernel with the integrated tmpfs support.
o Install the custom kernel and configure the bootloader to boot into the new
kernel image with tmpfs as the root file system.
7. Evaluate Performance and Functionality:
o Measure file system performance metrics such as file access speed, memory
usage, and CPU utilization under various workloads.
o Evaluate the reliability and limitations of tmpfs as a root file system,
considering its volatile nature and impact on system stability.
5. Sample Output/Result of the Experiment
Upon successful execution, the experiment should demonstrate:
Successful booting of the custom kernel with tmpfs mounted as the root file system.
Ability to perform file operations such as creating, reading, writing, and deleting files
and directories within tmpfs.
Output showing system resource utilization and performance benchmarks comparing
tmpfs to traditional file systems.
6.Inferences Obtained from the Experiment
Comparison of file system performance between tmpfs and traditional disk-based file
systems.
Assessment of memory usage and scalability of tmpfs for handling large numbers of
files and directories.
Insight into the benefits (speed, simplicity) and drawbacks (volatility, size limits) of
using a memory-based file system as the root file system.