Operating System (OS) Complete Notes With Most Imp. QNA
1 The strategy of allowing processes that are logically runnable to be temporarily suspended is called
A. preemptive scheduling
B. non-preemptive scheduling
C. shortest job first
D. first come first served
E. None of the above
2 Process is
A. program in High level language kept on disk
B. contents of main memory
C. a program in execution ✓
D. a job in secondary memory
E. None of the above
3 Fork is
A. the dispatching of a task
B. the creation of a new job
C. the creation of a new process ✓
D. increasing the priority of a task
E. None of the above
4 Interprocess communication
A. is required for all processes
B. is usually done via disk drives
C. is never necessary
D. allows processes to synchronize activity ✓
5 The FIFO algorithm
A. executes first the job that last entered the queue
B. executes first the job that first entered the queue ✓
C. executes first the job that has been in the queue the longest
D. executes first the job with the least processor needs
E. None of the above
6 Inter-process communication
A. is required for all processes
B. is usually done via disk drives
C. is never necessary
D. allows processes to synchronize activity ✓
UNIT-1
PART – B: (Short Answer Questions)
1 What are the functions of operating system?
Functions of Operating system:
• Memory Management
• File Management
• Device Management
• I/O management
• Networking
• Security
• Processor Management
• Secondary storage management
• Command interpretation
When a computer is first turned on or booted, it needs an initial program to run. This initial program is known as the Bootstrap Program. It is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM). The bootstrap program locates the kernel, loads it into main memory, and starts its execution.
Some popular Operating Systems include Linux Operating System, Windows Operating
System, VMS, OS/400, AIX, z/OS, etc.
5 What is a Kernel?
Kernel is the active part of an OS, i.e., it is the part of the OS running at all times. It is a program which can interact with the hardware. Ex: device drivers, dll files, system files, etc.
Or
Kernel is the core part of an operating system which manages system resources. It also acts like a
bridge between application and hardware of the computer. It is one of the first programs loaded on
start-up (after the Bootloader)
Inter-process communication (IPC) is used for exchanging data between multiple threads in one or more processes or programs. The processes may be running on a single computer or on multiple computers connected by a network.
• User mode and monitor mode are distinguished by a bit called the mode bit.
• User mode uses bit value 1 and monitor mode uses bit value 0.
• At boot time, the hardware starts in monitor mode.
• When an interrupt or trap occurs, the hardware switches to monitor mode.
• The system always switches to user mode before passing control to a user program.
• Whenever the system gains control of the computer it works in monitor mode; otherwise it works in user mode.
10 Explain PCB.
• For each process, the operating system maintains the data structure, which keeps the
complete information about that process. This record or data structure is called Process
Control Block (PCB).
• Whenever a user creates a process, the operating system creates the corresponding PCB for
that process. These PCBs of the processes are stored in the memory that is reserved for the
operating system.
• The process control block has many fields that store information about the process. A PCB contains the Process-Id, Process State, Process Priority, Accounting Information, Program Counter, and other information which helps in controlling the operations of the process.
• All the nodes in the distributed system are connected to each other. So nodes
can easily share data with other nodes.
• More nodes can easily be added to the distributed system i.e. it can be scaled
as required.
• Failure of one node does not lead to the failure of the entire distributed
system. Other nodes can still communicate with each other.
• Resources like printers can be shared with multiple nodes rather than being
restricted to just one.
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    fork();                   /* create a child process; both continue from here */
    printf("Hello world!\n"); /* printed twice: once by the parent, once by the child */
    return 0;
}
Example of exec()
Calling program:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    char *args[] = {"./EXEC", NULL};
    execv(args[0], args);   /* replace this process image with the EXEC program */
    printf("Ending-----");  /* reached only if execv() fails */
    return 0;
}
Called program (compiled as EXEC):
#include <stdio.h>
#include <unistd.h>

int main()
{
    int i;
    /* example body: print a few lines from the new program image */
    for (i = 0; i < 3; i++)
        printf("Running in the exec'ed program (%d)\n", i);
    return 0;
}
17 What is the function of the following UNIX commands: vi, cat, pwd, ps?
vi: The standard screen-oriented text editor, used to create and edit text files.
• Syntax: vi [FILE]
cat: Concatenate files and print to stdout.
• Syntax: cat [OPTION]…[FILE]
• Example: Create file1 with the entered content
• $ cat > file1
• Hello
• ^D
pwd: Print the present working directory.
• Syntax: pwd [OPTION]
• Example: Prints the absolute path of the current directory, e.g. /home/user/dir1 if the current directory is dir1
• $ pwd
ps: Report a snapshot of the current processes.
23 What are the primary differences between Network Operating System and
Distributed Operating System?
UNIT-1
PART – C: (Long Answer Questions)
1 Explain the various types of System calls with an example for each.
The interface between a process and an operating system is provided by system calls. In general, system
calls are available as assembly language instructions. They are also included in the manuals used by the
assembly level programmers.
System calls are usually made when a process in user mode requires access to a resource. Then it
requests the kernel to provide the resource via a system call.
There are mainly five types of system calls. These are explained in detail as follows –
1)Process Control
These system calls deal with processes such as process creation, process termination etc.
2)File Management
These system calls are responsible for file manipulation such as creating a file, reading a file, writing into a file etc.
3)Device Management
These system calls are responsible for device manipulation such as reading from device buffers, writing
into device buffers etc.
4)Information Maintenance
These system calls handle information and its transfer between the operating system and the user
program.
5)Communication
These system calls are useful for interprocess communication. They also deal with creating and deleting
a communication connection.
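As an illustration, here is a minimal C sketch (assuming a POSIX system; the file name demo.txt is hypothetical) that touches one representative call from each of the five categories:

#include <stdio.h>
#include <fcntl.h>     /* open */
#include <unistd.h>    /* fork, getpid, pipe, read, write, close */
#include <sys/wait.h>  /* wait */

int main(void)
{
    int fd, pipefd[2];
    char buf[16];

    fd = open("demo.txt", O_CREAT | O_WRONLY, 0644); /* file management */
    write(fd, "hi\n", 3);                            /* writing a file */
    close(fd);

    printf("pid = %d\n", (int)getpid());             /* information maintenance */

    pipe(pipefd);                                    /* communication */
    if (fork() == 0) {                               /* process control: create a child */
        write(pipefd[1], "msg", 4);                  /* send a message to the parent */
        _exit(0);                                    /* process control: terminate */
    }
    read(pipefd[0], buf, sizeof(buf));
    wait(NULL);                                      /* process control: wait for the child */
    printf("received: %s\n", buf);
    return 0;
}

(Device management calls such as ioctl() are used in the same way on device files.)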
Define operating system and list out the function and component of
operating system.
• Memory Management
=>Memory management refers to management of Primary Memory or
Main Memory. Main memory is a large array of words or bytes where each
word or byte has its own address.
• Processor Management
=> An Operating System does the following activities for processor management
−
• Keeps tracks of processor and status of process. The program responsible
for this task is known as traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates processor when a process is no longer required.
• Device Management
• File Management
=> A file system is normally organized into directories for easy navigation
and usage. These directories may contain files and other directories.
• Security
1 Kernel
The kernel in the OS provides the basic level of control on all the computer
peripherals.
2 Process Execution
3 Interrupt
In the operating system, interrupts are essential because they give a reliable
technique for the OS to communicate & react to their surroundings.
4 Memory Management
5 Multitasking
6 Networking
Networking can be defined as processors interacting with each other
through communication lines. The design of the communication network must
consider routing, connection methods, safety, the problems of contention, and security.
7 Security
The user interface (UI), which may be graphical (GUI) or text-based, is the part of an OS
that permits an operator to interact with the machine. A text-based user interface displays
text, and its commands are typed on a command line with the help of a keyboard.
A process is an active entity, whereas a program is a passive entity.
A program resides in secondary memory, whereas a process resides in main memory during its execution.
Process Control Block is a data structure that contains information of the process
related to it. The process control block is also known as a task control block, entry
of the process table, etc.
It is very important for process management as the data structuring for processes
is done in terms of the PCB. It also defines the current state of the operating
system.
Structure of the Process Control Block
The process control block stores many data items that are needed for efficient process
management. Some of these data items are explained below −
The following are the data items −
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This shows the number of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the
process.
Registers
This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process
CPU Scheduling Information
The process priority, pointers to scheduling queues etc. is the CPU scheduling
information that is contained in the PCB. This may also include any other
scheduling parameters.
Memory Management Information
The memory management information includes the page tables or the segment
tables depending on the memory system used. It also contains the value of the
base registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list of
files etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are
all a part of the PCB accounting information.
Location of the Process Control Block
The process control block is kept in a memory area that is protected from the
normal user access. This is done because it contains important process
information. Some of the operating systems place the PCB at the beginning of the
kernel stack for the process as it is a safe location.
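As a sketch, the fields above could be grouped in C roughly as follows (the field names and sizes are illustrative only, not taken from any real kernel):

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;             /* process number */
    enum proc_state state;           /* process state */
    unsigned long   program_counter; /* address of the next instruction */
    unsigned long   registers[16];   /* saved CPU registers */
    int             priority;        /* CPU-scheduling information */
    struct pcb     *next_in_queue;   /* link into a scheduling queue */
    unsigned long   base, limit;     /* memory-management information */
    int             open_files[32];  /* list of open file descriptors */
    unsigned long   cpu_time_used;   /* accounting information */
};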
Differentiate between long term scheduler and short term scheduler. What
is the purpose of medium term scheduler?
Difference Between Long-Term and Short-Term Scheduler:
• The long-term scheduler (job scheduler) selects processes from the job pool and loads them into main memory; the short-term scheduler (CPU scheduler) selects one of the ready processes and allocates the CPU to it.
• The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides no such control.
• The speed of the long-term scheduler is less than that of the short-term scheduler, which must be very fast since it runs on every CPU scheduling decision.
Purpose of the medium-term scheduler: it swaps processes out of main memory (and later back in) to reduce the degree of multiprogramming and improve the mix of CPU-bound and I/O-bound processes.
3. Distributed Operating System –
These operating systems are a recent advancement in the world of
computer technology and are being widely accepted all over the world, and
that too, at a great pace. Various autonomous interconnected computers
communicate with each other using a shared communication network.
Independent systems possess their own memory unit and CPU.
4. Network Operating System –
These systems run on a server and provide the capability to manage data, users,
groups, security, applications, and other networking functions. These types of
operating systems allow shared access to files, printers, security, applications, and
other networking functions over a small private network.
Following are a few common services provided by an operating system −
• Program execution
=> Operating systems handle many kinds of activities from user programs
to system programs like printer spooler, name servers, file server, etc.
Each of these activities is encapsulated as a process.
• I/O operations
=> An I/O subsystem comprises I/O devices and their corresponding
driver software. Drivers hide the peculiarities of specific hardware devices
from the users.
• Error Detection
=> Errors can occur anytime and anywhere. An error may occur in the CPU, in
I/O devices or in the memory hardware. The OS constantly checks for possible
errors and takes appropriate action to ensure correct and consistent computing.
• Resource Allocation
• Protection
=> Considering a computer system having multiple users and concurrent
execution of multiple processes, the various processes must be protected from
each other's activities.
All other layers are assembled on top of the core element and are progressively removed from
interacting with the hardware. Each level communicates with the layer above or below it. At the
top is the user interface which presents the interface between the user and the software. When
a user executes a task, the command is transmitted through the different layers until it reaches
the correct one, for example, the processor.
Unit 2
1 Which module gives control of the CPU to the process selected by the short-term scheduler?
a) dispatcher b) interrupt c) scheduler d) none of the mentioned
2 The interval from the time of submission/response of a process to the time of completion is termed as
a) waiting time b) turnaround time c) response time d) throughput
3 In priority scheduling algorithm, when a process arrives at the ready queue, its priority is compared
with the priority of: a) all process b) currently running process c) parent process d) init process
6 Which module gives control of the CPU to the process selected by the short-term scheduler?
A. dispatcher
B. interrupt
C. scheduler
D. none of the mentioned
7 The processes that are residing in main memory and are ready and waiting to execute are kept on a list called:
A. job queue
B. ready queue
C. execution queue
D. process queue
8 The interval from the time of submission of a process to the time of completion is termed as:
A. waiting time
B. turnaround time
C. response time
D. throughput
A. time
B. space
C. money
D. All of these
14 Which of the following algorithms tends to minimize the process flow time ?
15 Under multiprogramming, turnaround time for short jobs is usually ________ and that for long jobs is slightly
___________.
A. Lengthened; Shortened
B. Shortened; Lengthened
C. Shortened; Shortened
D. Shortened; Unchanged
UNIT-2
PART – B: (Short Answer Questions)
Pre-emptive scheduling means that once a process has started its execution, the currently
running process can be paused to handle some other process of higher priority; that is,
we can pre-empt the CPU from one process and give it to another if required.
Non-pre-emptive scheduling means that once a process starts its execution, or the CPU is
processing a specific process, it cannot be halted; in other words, we cannot pre-empt
(take control of) the CPU from it until the process releases the CPU itself.
15 What are the requirements that a solution to the critical section problem must
satisfy?
(Answer same as NO.13)
16 What are the benefits of multithreaded programming?
Responsiveness: multithreading allows a program to continue running even if part of it is blocked.
Resource sharing: threads share the memory and resources of their process, allowing better utilization of resources.
Economy: creating and managing threads is cheaper than creating processes, since threads share resources.
Scalability: one thread runs on one CPU, so in multithreaded processes threads can be distributed over a series of processors to scale.
UNIT-2
PART – C: (Long Answer Questions)
Draw three Gantt charts for execution of the processes using SRTF, RR (Time quantum=2) and preemptive
priority scheduling. Separately compute average waiting time and average turnaround time of the processes on
execution of the three algorithms. [10]
Explain Peterson’s solution on critical section problem.
Peterson’s solution is a software based solution to the critical section problem. Consider two processes P0 and P1. For
convenience, when presenting Pi, we use Pj to denote the other process; that is, j == 1 - i.
The processes share two variables:
boolean flag [2] ;
int turn;
Initially flag [0] = flag [1] = false, and the value of turn is immaterial (but is either
0 or 1). The structure of process Pi is shown below.
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;               /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);
To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that
if the other process wishes to enter the critical section it can do so. If both processes try to enter at the same time, turn
will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur, but
will be overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its
critical section first.
We now prove that this solution is correct. We need to show that:
1. Mutual exclusion is preserved,
2. The progress requirement is satisfied,
3. The bounded-waiting requirement is met.
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note
that, if both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true. These
two observations imply that P0 and P1 could not have successfully executed their while statements at about the same
time, since the value of turn can be either 0 or 1, but cannot be both. Hence, one of the processes, say Pj, must have
successfully executed the while statement, whereas Pi had to execute at least one
additional statement ("turn == j"). However, since, at that time, flag[j] == true and turn == j, and this condition will
persist as long as Pj is in its critical section, mutual exclusion is preserved.
To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is
stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one. If Pj is not ready to
enter the critical section, then flag[j] == false and Pi can enter its critical section. If Pj has set flag[j] to true and is also
executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If
turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false,
allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not
change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress)
after at most one entry by Pj (bounded waiting).
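The following is a runnable sketch of this algorithm for two POSIX threads. C11 sequentially consistent atomics are used, because Peterson's algorithm is not safe under the weaker memory orderings of modern hardware; compile with something like cc -pthread peterson.c:

#include <stdio.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdatomic.h>

atomic_bool flag[2];
atomic_int turn;
long counter = 0;                            /* shared data guarded by the lock */

void *worker(void *arg)
{
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);        /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                /* busy wait */
        counter++;                           /* critical section */
        atomic_store(&flag[i], false);       /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}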
What do you mean by Semaphore? Discuss the Dining Philosophers problem using semaphore.
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic
operations: wait and signal. These operations were originally termed P (for wait; from the Dutch proberen, to test) and
V (for signal; from verhogen, to increment). The classical definition of wait in pseudocode is
wait(S) {
    while (S <= 0)
        ;   /* no-op: busy wait */
    S--;
}
The classical definition of signal in pseudocode is
signal(S) {
    S++;
}
The Dining Philosopher Problem – The Dining Philosopher Problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks
adjacent to him. Each chopstick may be picked up by either of its adjacent philosophers, but not by both.
Dining-Philosophers Problem
• The structure of Philosopher i:
process P[i]:
    while (true) {
        THINK;
        PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) % 5]);
        EAT;
        PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) % 5]);
    }
There are three states of philosopher : THINKING, HUNGRY and EATING. Here there are two semaphores : Mutex
and a semaphore array for the philosophers.
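A compact runnable sketch with POSIX threads and semaphores follows. Note that the structure above can deadlock if all five philosophers lift their left chopstick at the same time; the sketch breaks the circular wait by making the last philosopher pick up the right chopstick first:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t chopstick[N];                         /* one binary semaphore per chopstick */

void *philosopher(void *arg)
{
    int i = (int)(long)arg;
    int first = i, second = (i + 1) % N;
    if (i == N - 1) { first = (i + 1) % N; second = i; } /* break the circular wait */
    for (int round = 0; round < 3; round++) {
        /* THINK */
        sem_wait(&chopstick[first]);        /* PICKUP */
        sem_wait(&chopstick[second]);
        printf("philosopher %d eats\n", i); /* EAT */
        sem_post(&chopstick[second]);       /* PUTDOWN */
        sem_post(&chopstick[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)(long)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}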
What do you mean by binary semaphore and counting semaphore? Explain implementation of wait () and signal.
Counting Semaphore: This type of semaphore uses a count that allows a resource to be acquired or released numerous times.
If the initial count = 0, the counting semaphore is created in the unavailable state. The value of a counting
semaphore can range over an unrestricted domain.
Binary Semaphore: Binary semaphores are quite similar to counting semaphores, but their value is restricted to 0
and 1. In this type of semaphore, the wait operation works only if the semaphore = 1, and the signal operation succeeds
only when the semaphore = 0. Binary semaphores are easier to implement than counting semaphores.
Wait() operation: This semaphore operation controls the entry of a task into the critical section. If the value of the
semaphore argument S is positive, it is decremented and the caller proceeds. Once the value becomes negative after the
decrement, the calling process is held up (blocked) until the required condition is satisfied. It is also called the P(S) operation.
Implementation of wait:
wait(S) {
    value--;
    if (value < 0) {
        /* add this process to the waiting queue */
        block();
    }
}
Signal operation:This type of Semaphore operation is used to control the exit of a task from a critical section. It helps
to increase the value of the argument by 1, which is denoted as V(S).
Implementation of signal:
signal(S) {
    value++;
    if (value <= 0) {
        /* remove a process P from the waiting queue */
        wakeup(P);
    }
}
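In user space, block() and wakeup() can be mimicked with a mutex and a condition variable. The sketch below uses a common variant: instead of letting value go negative to count the waiters, as in the textbook code above, it keeps value non-negative and blocks while value is zero:

#include <pthread.h>

struct semaphore {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t cond;
};

void semaphore_wait(struct semaphore *s)   /* P(S) */
{
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                  /* nothing available: block() */
        pthread_cond_wait(&s->cond, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void semaphore_signal(struct semaphore *s) /* V(S) */
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);         /* wakeup(P): wake one waiter */
    pthread_mutex_unlock(&s->lock);
}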
Consider the following five processes, with the length of the CPU burst time given in milliseconds.
Process Burst time P1 - 10, P2 - 29, P3 - 3, P4 – 7, P5 - 12 Consider the First come First serve (FCFS), Non
Preemptive Shortest Job First (SJF), Round Robin (RR) with (quantum=10ms) scheduling algorithms. Illustrate
the scheduling using Gantt chart.
• Which algorithm will give the minimum average waiting time?
Ans: The Gantt chart for FCFS scheduling is
| P1 | P2 | P3 | P4 | P5 |
0    10   39   42   49   61
Turnaround time = Finished Time – Arrival Time
Turnaround time for process P1 = 10 – 0 = 10
Turnaround time for process P2 = 39 – 0 = 39
Turnaround time for process P3 = 42 – 0 = 42
Turnaround time for process P4 = 49 – 0 = 49
Turnaround time for process P5 = 61 – 0 = 61
Average Turnaround time = (10+39+42+49+61)/5 = 40.2
The Gantt chart for SJF scheduling is
| P3 | P4 | P1 | P5 | P2 |
0    3    10   20   32   61
Turnaround time for process P3 = 3 – 0 = 3
Turnaround time for process P4 = 10 – 0 = 10
Turnaround time for process P1 = 20 – 0 = 20
Turnaround time for process P5 = 32 – 0 = 32
Turnaround time for process P2 = 61 – 0 = 61
Average Turnaround time = (3+10+20+32+61)/5 = 25.2
The Gantt chart for RR (quantum = 10 ms) scheduling is
| P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
0    10   20   23   30   40   50   52   61
Turnaround time for process P1 = 10 – 0 = 10
Turnaround time for process P2 = 61 – 0 = 61
Turnaround time for process P3 = 23 – 0 = 23
Turnaround time for process P4 = 30 – 0 = 30
Turnaround time for process P5 = 52 – 0 = 52
Average Turnaround time = (10+61+23+30+52)/5 = 35.2
Waiting time = Turnaround time – Burst time, giving average waiting times of 28 ms (FCFS), 13 ms (SJF) and 23 ms (RR). So SJF gives both the minimum average waiting time and the minimum average turnaround time.
Explain the different criteria used in an operating system during scheduling, and what importance do they have in choosing the optimal algorithm for a given snapshot?
Scheduling Criteria
1. CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilisation can range from 0 to 100 but in a real-time system, it varies from 40 to 90
percent depending on the load upon the system.
2. Throughput –
A measure of the work done by CPU is the number of processes being executed and completed per unit
time. This is called throughput. The throughput may vary depending upon the length or duration of
processes.
3. Turnaround time –
For a particular process, an important criterion is how long it takes to execute that process. The time
elapsed from the time of submission of a process to the time of completion is known as the turnaround
time. Turn-around time is the sum of times spent waiting to get into memory, waiting in ready queue,
executing in CPU, and waiting for I/O.
4. Waiting time –
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in the ready
queue.
5. Response time –
In an interactive system, turn-around time is not the best criterion. A process may produce some output
fairly early and continue computing new results while previous results are being output to the user.
Thus another criterion is the time taken from the submission of a request until the first response
is produced. This measure is called response time.
Explore the Reader’s Writer’s problem and Producer Consumer problem by using Semaphore.
Reader’s Writer’s Problem:
→A data set is shared among a number of concurrent processes
– Readers – only read the data set, do not perform any updates
– Writers – can both read and write the data set (perform the updates).
• If two readers read the shared data simultaneously, there will be no problem. If both a reader
and a writer access the shared data simultaneously, then there will be a problem.
• In the solution of the reader-writer problem, the processes share the following data structures:
semaphore mutex, wrt;
int readcount;
• Where →Semaphore mutex is initialized to 1.
→ Semaphore wrt is initialized to 1.
→ Integer readcount is initialized to 0.
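The classical structures of the writer and reader processes built on these semaphores are shown below (standard textbook form; readers are given priority):

Writer process:
do {
    wait(wrt);
    /* WRITING IS PERFORMED */
    signal(wrt);
} while(1);

Reader process:
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);        /* the first reader locks out writers */
    signal(mutex);
    /* READING IS PERFORMED */
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);      /* the last reader lets writers back in */
    signal(mutex);
} while(1);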
Producer-Consumer Problem: the structure of the producer process is
do {
    /* PRODUCE ITEM */
    wait(empty);
    wait(mutex);
    /* PUT ITEM IN BUFFER */
    signal(mutex);
    signal(full);
} while(1);
In the above code, mutex, empty and full are semaphores. Here mutex is initialized to 1, empty is initialized to n
(maximum size of the buffer) and full is initialized to 0.
The mutex semaphore ensures mutual exclusion. The empty and full semaphores count the number of empty and full
spaces in the buffer.
After the item is produced, wait operation is carried out on empty. This indicates that the empty space in the buffer has
decreased by 1. Then wait operation is carried out on mutex so that consumer process cannot interfere.
After the item is put in the buffer, signal operations are carried out on mutex and full. The former indicates that the
consumer process can now act, and the latter shows that the number of full slots in the buffer has increased by 1.
Consumer Process:
do {
    wait(full);
    wait(mutex);
    /* REMOVE ITEM FROM BUFFER */
    signal(mutex);
    signal(empty);
    /* CONSUME ITEM */
} while(1);
The wait operation is carried out on full. This indicates that items in the buffer have decreased by 1. Then wait operation
is carried out on mutex so that producer process cannot interfere.
Then the item is removed from the buffer. After that, signal operations are carried out on mutex and empty. The former
indicates that the producer process can now act, and the latter shows that the empty space in the buffer has increased by 1.
What is a thread? Discuss and differentiate between user-level and kernel-level threads with their advantages and
disadvantages. What are the different thread models we have? Explain them in detail.
[8]
Ans: A thread is a single sequential stream of execution within a process. It is also called a lightweight process. In a process,
threads allow multiple streams of execution. The CPU switches rapidly back and forth among the threads, giving the
illusion that the threads are running in parallel. A thread can be in any of several states (Running, Blocked, Ready or
Terminated). In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread
consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another like
processes are; as a result, threads share with other threads their code section, data section, and OS resources such as open files
and signals.
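A small POSIX-threads sketch illustrates this sharing: both threads update a single global variable in the process's data section. The unsynchronized increment is deliberately racy (the final value may be less than 2000), which is exactly why the synchronization tools of this unit exist:

#include <stdio.h>
#include <pthread.h>

int shared = 0;                    /* data section shared by all threads */

void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000; i++)
        shared++;                  /* both threads touch the same variable */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* unlike two fork()ed processes, the threads saw one copy of `shared` */
    printf("shared = %d\n", shared);
    return 0;
}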
User-Level Threads
The user-level threads are implemented by users and the kernel is not aware of the existence of these threads. It handles
them as if they were single-threaded processes. User-level threads are small and much faster than kernel level threads.
They are represented by a program counter(PC), stack, registers and a small process control block. Also, there is no
kernel involvement in synchronization for user-level threads.
Advantages of User-Level Threads
Some of the advantages of user-level threads are as follows −
•User-level threads are easier and faster to create than kernel-level threads. They can also be more easily
managed.
A
• User-level threads can be run on any operating system.
• There are no kernel mode privileges required for thread switching in user-level threads.
Advantages of Kernel-Level Threads
• Multiple threads of the same process can be scheduled on different processors with kernel-level threads.
• The kernel routines themselves can be multithreaded.
• If a kernel-level thread is blocked, another thread of the same process can be scheduled by the kernel.
Disadvantages of Kernel-Level Threads
Some of the disadvantages of kernel-level threads are as follows −
• A mode switch to kernel mode is required to transfer control from one thread to another in a process.
• Kernel-level threads are slower to create as well as manage as compared to user-level threads.
Thread Models:
Many-To-One Model
• In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
• Thread management is handled by the thread library in user space, which is very efficient.
• However, if a blocking system call is made, then the entire process blocks, even if the other user threads
would otherwise be able to continue.
• Because a single kernel thread can operate only on a single CPU, the many-to-one model does not allow
individual processes to be split across multiple CPUs.
• Green threads for Solaris and GNU Portable Threads implemented the many-to-one model in the past, but few
systems continue to do so today.
One-To-One Model:
• The one-to-one model creates a separate kernel thread to handle each user thread.
• One-to-one model overcomes the problems listed above involving blocking system calls and the splitting
of processes across multiple CPUs.
• However the overhead of managing the one-to-one model is more significant, involving more overhead
and slowing down the system.
• Most implementations of this model place a limit on how many threads can be created.
• Linux and Windows from 95 to XP implement the one-to-one model for threads.
Many-To-Many Model:
• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel
threads, combining the best features of the one-to-one and many-to-one models.
• Users have no restrictions on the number of threads created.
• Blocking kernel system calls do not block the entire process.
• Processes can be split across multiple processors.
• Individual processes may be allocated variable numbers of kernel threads, depending on the number of CPUs
present and other factors.
What is the purpose of CPU Scheduling? Mention various scheduling criteria’s. Explain in brief various CPU
scheduling algorithm.
CPU Scheduler’s main objective is to increase system performance in accordance with the chosen set of criteria. It is
the change of ready state to running state of the process. CPU scheduler selects a process among the processes that are
ready to execute and allocates CPU to one of them.
Various Scheduling Criteria:
1. CPU utilization- Keep the CPU as busy as possible.
2. Throughput – Number of processes that complete their execution per
time unit
3. Turnaround Time - The interval from the time of submission of a process
to the time of completion. Turnaround time is the sum of the periods
spent waiting to get into memory, waiting in the ready queue, executing
on the CPU, and doing I/O.
4. Waiting Time-sum of the periods spent waiting in the ready queue.
5. Response Time - the amount of time from when a request is submitted
until the first response is produced, not the complete output (relevant in
time-sharing environments).
Various CPU Scheduling Algorithm:
First Come First Serve
First Come First Serve is the full form of FCFS. It is the easiest and most simple CPU scheduling algorithm. In this type
of algorithm, the process which requests the CPU gets the CPU allocation first. This scheduling method can be managed
with a FIFO queue.
As the process enters the ready queue, its PCB (Process Control Block) is linked with the tail of the queue. So, when
CPU becomes free, it should be assigned to the process at the beginning of the queue.
Characteristics of the FCFS method:
It is a non-preemptive scheduling algorithm.
Jobs are always executed on a first-come, first-served basis.
It is easy to implement and use.
However, this method is poor in performance, and the general wait time is quite high.
Shortest Remaining Time
The full form of SRT is Shortest remaining time. It is also known as SJF preemptive scheduling. In this method, the
process will be allocated to the task, which is closest to its completion. This method prevents a newer ready state process
from holding the completion of an older process.
Round-Robin Scheduling
Round robin is one of the oldest and simplest scheduling algorithms. The name of this algorithm comes from the round-robin
principle, where each person gets an equal share of something in turn. It is mostly used for scheduling in
multitasking systems. This algorithm helps provide starvation-free execution of processes.
Characteristics of Round-Robin Scheduling
Round robin is a preemptive, clock-driven model: a timer interrupt preempts the running process when its time slice expires.
The time slice should be the minimum that is needed for a specific task to be processed; however, it may vary for different
processes.
It suits time-sharing systems that must respond to events within a specific time limit.
Shortest Job First
SJF is a full form of (Shortest job first) is a scheduling algorithm in which the process with the shortest execution time
should be selected for execution next. This scheduling method can be preemptive or non-preemptive. It significantly
reduces the average waiting time for other processes awaiting execution.
Characteristics of SJF Scheduling
Each job has an associated unit of time it needs to complete.
In this method, when the CPU is available, the next process or job with the shortest completion time is executed
first.
It is commonly implemented with a non-preemptive policy.
This algorithm method is useful for batch-type processing, where waiting for jobs to complete is not critical.
It improves throughput by executing shorter jobs first, which mostly yields a shorter average turnaround
time.
Multiple-Level Queues Scheduling
This algorithm separates the ready queue into various separate queues. In this method, processes are assigned to a queue
based on a specific property of the process, like the process priority, size of the memory, etc.
However, this is not an independent scheduling OS algorithm as it needs to use other types of algorithms in order to
schedule the jobs.
Characteristic of Multiple-Level Queues Scheduling:
Multiple queues should be maintained for processes with some characteristics.
Every queue may have its separate scheduling algorithms.
Priorities are given for each queue.
UNIT 3 COMPLETE SOLVED
1 Thrashing
A.is a natural consequence of virtual memory systems
B. can always be avoided by swapping
C. always occurs on large computers
D.can be caused by poor paging algorithms
E. None of the above
2 Memory
A. is a device that performs a sequence of operations specified by instructions in memory.
B. is the device where information is stored
C. is a sequence of instructions
D. is typically characterized by interactive processing and time-slicing of the CPU's time to allow quick response to each user.
E. None of the above
3 The principle of locality of reference justifies the use of
A.reenterable
B. non reusable
C. virtual memory
D.cache memory
E. None of the above
4 Thrashing can be avoided if
A.the pages, belonging to the working set of the programs, are in main memory
B. the speed of CPU is increased
C. the speed of I/O processor is increased
D.all of the above
E. None of the above
5 Fragmentation of the file system
A.occurs only if the file system is used improperly
B. can always be prevented
C.can be temporarily removed by compaction
D.is a characteristic of all file systems
E. None of the above
6 The memory allocation scheme subject to "external" fragmentation is
A.segmentation
B. swapping
C. pure demand paging
D.multiple contiguous fixed partitions
E. None of the above
7 Page stealing
A.is a sign of an efficient system
B. is taking page frames from other working sets
C. should be the tuning goal
D.is taking larger disk spaces for pages paged out
E. None of the above
8 A page fault
A. is an error in a specific page
B. occurs when a program accesses a page of memory
C.is an access to a page not currently in memory
D.is a reference to a page belonging to another program
E. None of the above
9 Which of the following statements is false?
A.a small page size causes large page tables
B. internal fragmentation is increased with small pages
C. a large page size causes instructions and data that will not be referenced to be brought into primary storage
D.I/O transfers are more efficient with large pages
E. None of the above
11 The address of a page table in memory is pointed by:
a. stack pointer
b. page table base register
c. page register
d. program counter
12 What is compaction?
a. a technique for overcoming internal fragmentation
b. a paging technique
c. a technique for overcoming external fragmentation
d. a technique for overcoming fatal error
13 With relocation and limit registers, each logical address must be _______ the limit
register.
a. Less than
b. equal to
c. greater than
d. None of these
14 The first fit, best fit and worst fit are strategies to select a ______.
a. Process from a queue to put in memory
b. Processor to run the next process
c. Free hole from a set of available holes
d. All of these
B. LRU
C. LFU
D. Working set
Basic It is the virtual address generated by CPU The physical address is a location in a
memory unit.
Address Space Set of all logical addresses generated by Set of all physical addresses mapped to the
Visibility The user can view the logical address of The user can never view physical address
a program. of program
Access The user uses the logical address to The user can not directly access physical
Generation The Logical Address is generated by the Physical Address is Computed by MMU
CPU
In short: a 1 in the valid-invalid bit signifies that the page is in memory, and 0 signifies that the page is either invalid or has not
been brought into memory yet.
5 Consider a logical address space of 128 pages of 1024 words each mapped onto a physical memory of 64 frames. How
many bits are there in logical and physical address?
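A worked solution: each page holds 1024 = 2^10 words, so the offset needs 10 bits. The logical address space has 128 = 2^7 pages, so a logical address needs 7 + 10 = 17 bits; the physical memory has 64 = 2^6 frames, so a physical address needs 6 + 10 = 16 bits.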
Wait-for-graph is one of the methods for detecting the deadlock situation. This method is suitable for smaller database. In this
method a graph is drawn based on the transaction and their lock on the resource. If the graph created has a closed loop or a
cycle, then there is a deadlock.
• Internal fragmentation happens when memory is divided into fixed-sized partitions; external fragmentation happens when
memory is divided into variable-size partitions based on the size of processes.
• The difference between the memory allocated and the required space is called internal fragmentation; unused spaces formed
between non-contiguous memory fragments that are too small to serve a new process constitute external fragmentation.
• The solution to internal fragmentation is best-fit allocation; solutions to external fragmentation are compaction and paging.
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued
execution.
Similar to the round-robin CPU-scheduling algorithm, when a quantum expires the memory manager can swap out that process
and swap another process into the memory space that has been freed.
Need for swapping: it is required when better processing speed is needed and the RAM is not enough to handle the load.
There comes a point, though, when even swapping no longer works well; at that point the RAM itself must be enlarged.
15 What are the approaches we follow for the address binding?
A user program goes through several steps before being executed. Address binding of instructions and data to
memory addresses can happen at three different stages:
– Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location
changes
– Load time: Must generate relocatable code if memory location is not known at compile time
– Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment
to another. Need hardware support for address maps (e.g., base and limit registers)
16 What is the role of a page table in paging?
Page Table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical
addresses. Logical addresses are generated by the CPU for the pages of the processes therefore they are generally used by the
processes. Physical addresses are the actual frame address of the memory. They are generally used by the hardware or more
specifically by RAM subsystems.
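A small worked example with assumed numbers: with a page size of 1024 bytes, logical address 2500 lies in page 2500 / 1024 = 2 at offset 2500 mod 1024 = 452; if the page table maps page 2 to frame 7, the physical address is 7 × 1024 + 452 = 7620.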
17 Differentiate between the base and limit registers.
The base register indicates where the page table starts in memory (this can be either a physical or a logical
address) and the limit register indicates the size of the table. The registers are usually not loaded directly; their values are
usually written to the hardware Process Control Block (PCB).
18 What is the purpose of wait for graph? Justify your answer.
Wait-for-graph is one of the methods for detecting the deadlock situation. This method is suitable for smaller database. In this
method a graph is drawn based on the transaction and their lock on the resource. If the graph created has a closed loop or a
cycle, then there is a deadlock.
19 What are the necessary conditions for deadlock?
Deadlock can arise only if four conditions hold simultaneously: mutual exclusion, hold and wait, no preemption, and circular wait.
20 What is a resource allocation graph?
The resource allocation graph is the pictorial representation of the state of a system. As its name suggests, the resource
allocation graph gives complete information about all the processes which are holding some resources or waiting for some
resources. It also contains information about all the instances of all the resources, whether they are available or being used
by the processes. In a resource allocation graph, a process is represented by a circle while a resource is represented by a
rectangle.
21 What is the basic approach of Page Replacement?
• A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to
guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs
(primary storage and processor time) of the algorithm itself.
If no frame is free, one that is not presently being used can be found and freed. A frame can be freed by writing its
contents to swap space and changing the page table to show that the page is no longer in memory. The freed frame can then be
used to hold the page for which the process faulted.
22 What are the major problems to implement Demand Paging?
The two major problems to implement demand paging are developing:
a. a frame allocation algorithm
b. a page replacement algorithm
23 Give the basic concepts about paging.
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme
permits the physical address space of a process to be non – contiguous.
24 Explain the basic concepts of segmentation.
In Operating Systems, Segmentation is a memory management technique in which, the memory is divided into the variable
size parts. Each part is known as segment which can be allocated to a process.The details about each segment are stored in a
table called as segment table. Segment table is stored in one (or many) of the segments.
25 Differentiate local and global page replacement algorithm.
Local replacement means that an incoming page is brought in only to the relevant process address space. Global
replacement policy allows any page frame from any process to be replaced. The latter is applicable to variable partitions
model only.
26 How is memory protected in a paged environment?
Memory protection is a way to control memory access rights on a computer, and is a part of most modern instruction set
architectures and operating systems. The main purpose of memory protection is to prevent a process from
accessing memory that has not been allocated to it.
Memory protection in a paged environment is accomplished by protection bits that are associated with each frame.
NO ANSWERS FOUND.
2 Write about segmentation with an example and discuss the basic difference between paging and segmentation.
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different sizes,
one for each module that contains pieces that perform related functions. Each segment is actually a different logical address
space of the program. When a process is to be executed, its corresponding segments are loaded into non-contiguous
memory, though every segment is loaded into a contiguous block of available memory.
Following are the important differences between Paging and Segmentation:
• Paging divides a process into fixed-size pages, while segmentation divides it into variable-size segments, one for each logical module.
• The page table stores the page data, whereas the segment table stores the segment data.
3 When does a page fault occur? Discuss the action taken by the operating system when a page fault occurs.
A page fault occurs when an access to a page that has not been brought into main memory takes place. The operating
system verifies the memory access, aborting the program if it is invalid. If it is valid, a free frame is located and I/O is
requested to read the needed page into the free frame. Upon completion of the I/O, the process table and the page table are
updated and the instruction is restarted.
5 Distinguish between internal and external fragmentation. Provide any two solutions to avoid external fragmentation.
• Internal fragmentation happens when memory is divided into fixed-sized partitions and the allocated partition is larger than
the process requires; external fragmentation happens when memory is divided into variable-size partitions and the unused
spaces formed between non-contiguous fragments are too small to serve a new process.
• Two solutions to avoid external fragmentation are compaction and paging (non-contiguous allocation).
Explain about the necessary conditions for deadlock. How we can prevent deadlock by using these?
When a process requests a resource that is held by another process, which in turn needs a resource held by the first process
in order to continue, the situation is called a deadlock. Deadlock requires four necessary conditions to hold simultaneously:
mutual exclusion, hold and wait, no preemption, and circular wait. Deadlock can be prevented by ensuring that at least one of
these conditions cannot hold, e.g. by making resources sharable, requiring processes to request all their resources at once,
allowing resources to be preempted, or imposing a total ordering on resource requests.
(ii) Semaphore
Semaphores are often used to restrict the number of threads than can access some (physical or logical) resource.
Semaphores are devices used to help with synchronization. If multiple processes share a common resource, they need a
way to be able to use that resource without disrupting each other. You want each process to be able to read from and write
to that resource uninterrupted.
A semaphore will either allow or disallow access to the resource, depending on how it is set up. One example setup
would be a semaphore which allowed any number of processes to read from the resource, but only one could ever be in
the process of writing to that resource at a time.
Semaphores are commonly used for two purposes: to share a common memory space and to share access to files.
Semaphores are one of the techniques for inter-process communication (IPC). The C programming language provides a
set of interfaces or "functions" for managing semaphores.
8 Given memory partitions of 120K, 520K, 320K, 324K and 620K (in order), how would each of the First fit, Best fit
and Worst fit algorithms place processes of 227K, 432K, 127K and 441K (in order)? Which algorithm makes the
most efficient use of memory?
First fit: 227K goes into the 520K partition (leaving 293K); 432K into 620K (leaving 188K); 127K into the 293K hole (leaving 166K); 441K must wait, since no remaining hole is large enough.
Best fit: 227K goes into 320K (leaving 93K); 432K into 520K (leaving 88K); 127K into 324K (leaving 197K); 441K into 620K (leaving 179K). All four processes are placed.
Worst fit: 227K goes into 620K (leaving 393K); 432K into 520K (leaving 88K); 127K into the 393K hole (leaving 266K); 441K must wait.
Best fit makes the most efficient use of memory here, since it is the only algorithm that places all four processes.
Distinguish between internal and external fragmentation. Provide any two solutions to avoid external
9 fragmentation.
REFER QUESTION NO. 5
10 Consider the following snapshot of a system:
        Allocation   Max      Available
        A B C        A B C    A B C
P0      0 1 0        7 5 3    3 3 2
P1      2 0 0        3 2 2
P2      3 0 2        9 0 2
P3      2 1 1        2 2 2
P4      0 0 2        4 3 3
i. What is the content of the matrix Need?
ii. Is the system in a safe state? If yes, what is the safe sequence? Show the detailed steps as per the Banker's algorithm.
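A worked solution (Need = Max - Allocation):
i. Need: P0 = 7 4 3, P1 = 1 2 2, P2 = 6 0 0, P3 = 0 1 1, P4 = 4 3 1.
ii. Yes, the system is in a safe state. Starting from Available = 3 3 2: P1's need (1 2 2) can be met, and P1 finishing returns its allocation, giving Available = 5 3 2; then P3 (need 0 1 1) runs, giving 7 4 3; then P4 (need 4 3 1) runs, giving 7 4 5; then P2 (need 6 0 0) runs, giving 10 4 7; finally P0 (need 7 4 3) runs. Hence <P1, P3, P4, P2, P0> is a safe sequence.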
11 Consider the following segment table:
   Segment   Base    Length
   0         240     500
   1         2150    28
   2         180     60
   3         1175    470
   4         1482    55
What are the physical addresses for the following logical addresses?
(a) 0,280 (b) 1,20 (c) 2,150 (d) 3,320 (e) 4,188
Assuming the columns above give the segment base and length:
(a) 0,280: 280 < 500, valid → 240 + 280 = 520
(b) 1,20: 20 < 28, valid → 2150 + 20 = 2170
(c) 2,150: 150 > 60 → illegal reference; trap to the operating system
(d) 3,320: 320 < 470, valid → 1175 + 320 = 1495
(e) 4,188: 188 > 55 → illegal reference; trap to the operating system
12 What do you mean by Thrashing? What are the methods to avoid Thrashing?
If a process does not have the number of frames it needs to support the pages in active use, it will quickly page-fault.
This high paging activity is called thrashing.
We must provide a process with as many frames as it needs; several techniques are used, notably the working-set model
and page-fault frequency control.
The Working-Set Model (Strategy): it starts by looking at how many frames a process is actually using. This approach
is based on the locality model.
Locality Model: it states that, as a process executes, it moves from locality to locality.
1 RAID level _____ is also known as block-interleaved parity organisation and uses block-level striping and keeps a
parity block on a separate disk.
A. 1
B. 2
C. 3
D. 4
3 Access in which records are accessed from and inserted into file, is classified as
1. direct access
2. sequential access
3. random access
4. duplicate access
1. disc format
2. disc address
3. disc footer
4. disc header
1. volatile memory
2. non volatile memory
3. backup memory
4. impact memory
6 In fixed head discs, the sum of rotational delay and transfer time is equal to
1. access time
2. delay time
3. processing time
4. storage time
7 The time taken by the disc to rotate and read data from the right place is classified as
1. rotational delay
2. access delay
3. seek time delay
4. reversal delay
1. magnetic disc
2. floppy disc
3. program tape
4. plain disc
1. master file
2. transaction file
3. particular file
4. reference file
10 Possible dangers and threats for files are that a file can be
1. destroyed
2. modified
3. accessed
4. all of above
UNIT-4
PART – B: (Short Answer Questions)
UNIT-4
PART – C: (Long Answer Questions)
1 What is a file? Explain various file allocation techniques with their advantages and disadvantages.
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic
tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by
the file's creator and user.
There are mainly three methods of file allocation in the disk. Each method has its advantages and disadvantages. Mainly a
system uses one method for all files within the system.
• Contiguous allocation
• Linked allocation
• Indexed allocation
Contiguous allocation
In this scheme, a file is made from the contiguous set of blocks on the disk. Linear ordering on the disk is defined by the
disk addresses.
• Each file in the disk occupies a contiguous address space on the disk.
• In this scheme, addresses are assigned in a linear fashion.
• It is very easy to implement the contiguous allocation method.
• In the contiguous allocation technique, external fragmentation is a major issue.
Advantages:
• In contiguous allocation, sequential and direct access are both supported.
• For direct access, the address of the kth block of a file that starts at block b can be obtained directly as b + k.
• This is very fast, and the number of seeks is minimal in the contiguous allocation method.
Disadvantages:
• It suffers from external fragmentation, and finding a large enough hole can be difficult.
• It is hard to grow a file, since the blocks that follow it on disk may already be allocated.
Linked allocation
The problems of contiguous allocation are solved in the linked allocation method. In this scheme, disk blocks are
arranged in the linked list form which is not contiguous.
Advantages:
1. There is no external fragmentation, since any free block can be used.
2. A file can grow easily; its size need not be declared in advance.
Disadvantages:
1. In this scheme, there is large no of seeks because the file blocks are randomly distributed on disk.
2. Linked allocation is comparatively slower than contiguous allocation.
3. Random or direct access is not supported by this scheme we cannot access the blocks directly.
4. The pointer is extra overhead on the system due to the linked list.
Indexed Allocation
In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file
has its own index, in the form of an array of disk block addresses.
Advantages:
1. Direct access is supported through the index block.
2. There is no external fragmentation.
Disadvantages:
1. The pointer overhead is greater than that of the linked allocation of the file.
2. Indexed allocation suffers from the wasted space.
3. For the large size file, it is very difficult for single index block to hold all the pointers.
4. For very small files, say files that span only 2-3 blocks, indexed allocation dedicates an entire block to the pointers,
which is inefficient in terms of memory utilization.
2 Suppose that the head of a moving-head hard disk with 200 tracks, numbered 0 to 199, is currently serving a request at
track 143 and has just finished a request at 125. The queue of requests is kept in FIFO order: 86, 147, 91, 177,
94, 150, 102, 175 and 130.
What is the total number of head movements needed to satisfy these requests for the following disk-scheduling
algorithms?
1. FCFS Scheduling
2. SSTF Scheduling
3. SCAN Scheduling
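A worked solution (the head at 143 is moving toward higher-numbered tracks, since it just came from 125; SCAN is assumed to travel to the last track, 199, before reversing):
1. FCFS: 143 → 86 → 147 → 91 → 177 → 94 → 150 → 102 → 175 → 130 = 57 + 61 + 56 + 86 + 83 + 56 + 48 + 73 + 45 = 565 cylinders.
2. SSTF: 143 → 147 → 150 → 130 → 102 → 94 → 91 → 86 → 175 → 177 = 4 + 3 + 20 + 28 + 8 + 3 + 5 + 89 + 2 = 162 cylinders.
3. SCAN: 143 → 147 → 150 → 175 → 177 → 199 → 130 → 102 → 94 → 91 → 86 = (199 - 143) + (199 - 86) = 56 + 113 = 169 cylinders.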
3 Discuss the linked allocation and indexed allocation schemes for file allocation. Compare the indexed allocation scheme
with the contiguous allocation scheme.
Linked allocation
The problems of contiguous allocation are solved in the linked allocation method. In this scheme, disk blocks are
arranged in the linked list form which is not contiguous. The disk block is scattered in the disk. In this scheme, the
directory entry contains the pointer of the first block and pointer of the ending block. These pointers are not for the users.
For example, a file of six blocks might start at block 10; each block contains a pointer to the next block, and the pointer in
the last block is nil. When we create a new file, we simply create a new directory entry with the linked allocation. Each
directory entry contains a pointer to the first disk block of the file; when the pointer is nil, it denotes an empty file.
Indexed Allocation
In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file
has its own index, in the form of an array of disk block addresses. The ith entry of the index block points to the ith
block of the file. The address of the index block is maintained by the directory. When we create a file, all pointers are set to
nil. A block is obtained from the free-space manager when the ith block is first written. When the index block is very
small, it is difficult to hold all the pointers for a large file; to deal with this issue, one of the following mechanisms is used:
• Linked scheme
• Multilevel scheme
• Combined scheme
Compared with contiguous allocation, indexed allocation supports direct access without suffering from external fragmentation, but it pays the overhead of the index block and extra seeks because the data blocks may be scattered across the disk; contiguous allocation gives the fastest access with minimal head movement, yet suffers from external fragmentation and makes file growth difficult.
4 Answer the following:
1. Disk Structure
2. RAID Structure.
1. The actual physical details of a modern hard disk may be quite complicated. Simply, there are one or more surfaces, each
of which contains several tracks, each of which is divided into sectors. There is one read/write head for every surface of the
disk.
Also, the same track on all surfaces is known as a 'cylinder'. When talking about movement of the read/write head, the
cylinder is a useful concept, because all the heads (one for each surface) move in and out of the disk together.
2. RAID, or "Redundant Arrays of Independent Disks", is a technique which makes use of a combination of multiple disks
instead of a single disk for increased performance, data redundancy or both. The term was coined by David Patterson,
Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987.