CS8493 Operating Systems Question Bank
REGULATIONS – 2017
Monolithic Kernel vs. Microkernel
Kernel size: Monolithic kernel size is large; microkernel size is small.
Design: A monolithic OS is complex to design; a microkernel OS is easier to design, implement and install.
Performance: Requests may be serviced faster in a monolithic kernel; requests may be serviced slower in a microkernel.
Services: In a monolithic kernel all OS services are included in the kernel; a microkernel provides only IPC and low-level device
management services.
19. What are advantages of Multiprogramming? (Nov/Dec 2017, Nov/Dec 2010)
Multi Programming increases CPU Utilization by organizing jobs so that the CPU
always has one to execute.
Advantages:
It increases CPU utilization.
It makes efficient use of the CPU by overlapping the demands for the CPU and I/O devices.
It gives increased throughput and lower response time.
20. Define Real Time System
A real-time system is one that must react to inputs and respond to them quickly. A real-time
system has well-defined, fixed time constraints.
21. What does the CPU do when there are no user programs to run?(Nov/Dec 2011)
The CPU will always do processing. Even though there are no application programs running,
the operating system is still running and the CPU will still have to process.
29. Why do APIs need to be used rather than system calls? (April/May 2015)
System calls are much slower than APIs (library calls) since for each system call, a context
switch has to occur to load the OS (which then serves the system call).
PART-B
1. Enumerate the different operating system structure and explain with neat sketch.
(APRIL/MAY 2019, APRIL/MAY 2018, APRIL/MAY 2017, Nov/Dec 2015, NOV/DEC 2013,
APRIL/MAY 2010)
An operating system is a construct that allows the user application programs to interact with
the system hardware. Since the operating system is such a complex structure, it should be created
with utmost care so it can be used and modified easily. An easy way to do this is to create the
operating system in parts. Each of these parts should be well defined with clear inputs, outputs and
functions.
Simple Structure
There are many operating systems that have a rather simple structure. These started as small systems
and rapidly expanded much further than their scope. A common example of this is MS-DOS. It was
designed simply for a niche group of people. There was no indication that it would become so
popular.
An image to illustrate the structure of MS-DOS is as follows:
It is better that operating systems have a modular structure, unlike MS-DOS. That would lead to
greater control over the computer system and its various applications. The modular structure would
also allow the programmers to hide information as required and implement internal routines as they
see fit without changing the outer specifications.
Layered Structure
One way to achieve modularity in the operating system is the layered approach. In this, the bottom
layer is the hardware and the topmost layer is the user interface.
An image demonstrating the layered approach is as follows:
As seen from the image, each upper layer is built on the bottom layer. All the layers hide some
structures, operations etc from their upper layers.
One problem with the layered structure is that each layer needs to be carefully defined. This is
necessary because the upper layers can only use the functionalities of the layers below them.
Examples of Windows and UNIX system calls:
Category                   Windows                   UNIX
File Management            CreateFile()              open()
                           ReadFile()                read()
                           WriteFile()               write()
                           CloseHandle()             close()
Device Management          SetConsoleMode()          ioctl()
                           ReadConsole()             read()
                           WriteConsole()            write()
Information Maintenance    GetCurrentProcessID()     getpid()
                           SetTimer()                alarm()
                           Sleep()                   sleep()
Communication              CreatePipe()              pipe()
                           CreateFileMapping()       shmget()
                           MapViewOfFile()           mmap()
There are many different system calls as shown above. Details of some of those system calls are as
follows:
wait()
In some systems, a process may wait for another process to complete its execution. This happens
when a parent process creates a child process and the execution of the parent process is suspended
until the child process executes. The suspending of the parent process occurs with a wait() system
call. When the child process completes execution, the control is returned back to the parent process.
exec()
This system call runs an executable file in the context of an already running process. It replaces the
previous executable file. This is known as an overlay. The original process identifier remains since a
new process is not created but data, heap, stack etc. of the process are replaced by the new process.
fork()
Processes use the fork() system call to create processes that are copies of themselves. This is one of
the major methods of process creation in operating systems. When a parent process creates a child
process, the parent may suspend its own execution (using wait()) until the child completes. When
the child process completes execution, control is returned back to the parent process.
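As an illustration, here is a minimal C sketch of how fork(), exec() and wait() are typically combined on a UNIX-like system; the program "/bin/ls" is just an example:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* create a child process */
    if (pid < 0) {
        perror("fork failed");
        exit(1);
    } else if (pid == 0) {
        /* child: overlay its image with a new program */
        execlp("/bin/ls", "ls", NULL);
        perror("exec failed");           /* reached only if exec fails */
        exit(1);
    } else {
        wait(NULL);                      /* parent: suspend until the child terminates */
        printf("Child complete\n");
    }
    return 0;
}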
exit()
The exit() system call is used by a program to terminate its execution. In a multithreaded
environment, this means that the thread execution is complete. The operating system reclaims
resources that were used by the process after the exit() system call.
kill()
The kill() system call is used to send a signal to a process, most commonly a termination signal that
urges the process to exit. However, the kill system call does not necessarily mean killing the process;
depending on the signal sent, it can have various meanings.
System Program:
System programs provide an environment where programs can be developed and executed.
In the simplest sense, system programs also provide a bridge between the user interface and system
calls. In reality, they are much more complex. For example, a compiler is a complex system
program.
System Programs Purpose
The system program serves as a part of the operating system. It traditionally lies between the user
interface and the system calls. The user view of the system is actually defined by system programs
and not system calls because that is what they interact with and system programs are closer to the
user interface.
An image that describes system programs in the operating system hierarchy is as follows:
In the above image, system programs as well as application programs form a bridge between the
user interface and the system calls. So, from the user's point of view, the operating system observed is actually
the system programs and not the system calls.
Types of System Programs
System programs can be divided into seven parts. These are given as follows:
Status Information
The status information system programs provide required data on the current or past status of the
system. This may include the system date, system time, available memory in system, disk space,
logged in users etc.
Communications
These system programs are needed for system communications such as web browsers. Web
browsers allow systems to communicate and access information from the network as required.
File Manipulation
These system programs are used to manipulate system files. This can be done using various
commands like create, delete, copy, rename, print etc. These commands can create files, delete files,
copy the contents of one file into another, rename files, print them etc.
Program Loading and Execution
The system programs that deal with program loading and execution make sure that programs can be
loaded into memory and executed correctly. Loaders and Linkers are a prime example of this type of
system programs.
File Modification
System programs that are used for file modification basically change the data in the file or modify it
in some other way. Text editors are a big example of file modification system programs.
Application Programs
Application programs can perform a wide range of services as per the needs of the users. These
include programs for database systems, word processors, plotting tools, spreadsheets, games,
scientific applications etc.
Programming Language Support
These system programs provide additional support features for different programming languages.
Some examples of these are compilers and debuggers: compilers translate a program into executable form, and
debuggers help locate and correct errors in it.
OS generation:
Operating Systems have evolved over the years. So, their evolution through the years can be
mapped using generations of operating systems. There are four generations of operating systems.
These can be described as follows:
4. Write short notes on operating system services and components.(NOV/DEC 2019, MAY/JUNE
2012)
An Operating System provides services to both the users and to the programs.
Program execution
I/O operations
File System manipulation
Communication
Error Detection
Resource Allocation
Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs like
printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a process.
A process includes the complete execution context (code to execute, data to manipulate, registers,
OS resources in use). Following are the major activities of an operating system with respect to
program management −
Loads a program into memory.
Executes the program.
Handles program's execution.
Provides a mechanism for process synchronization.
Provides a mechanism for process communication.
Provides a mechanism for deadlock handling.
I/O Operation
An I/O subsystem comprises of I/O devices and their corresponding driver software. Drivers hide
the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between user and device drivers.
I/O operation means read or write operation with any file or any specific I/O device.
Operating system provides the access to the required I/O device when required.
File System manipulation
A file represents a collection of related information. Computers can store files on the disk
(secondary storage), for long-term storage purpose. Examples of storage media include magnetic
tape, magnetic disk and optical disk drives like CD, DVD. Each of these media has its own
properties like speed, capacity, data transfer rate and data access methods.
A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an operating
system with respect to file management −
Communication
In case of distributed systems which are a collection of processors that do not share memory,
peripheral devices, or a clock, the operating system manages communications between all the
processes. Multiple processes communicate with one another through communication lines in the
network.
The OS handles routing and connection strategies, and the problems of contention and security.
Following are the major activities of an operating system with respect to communication −
Error Detection
Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices or in the
memory hardware. Following are the major activities of an operating system with respect to error
handling −
Resource Management
In case of multi-user or multi-tasking environment, resources such as main memory, CPU cycles
and file storage are to be allocated to each user or job. Following are the major activities of an
operating system with respect to resource management −
Protection
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or users to
the resources defined by a computer system. Following are the major activities of an operating
system with respect to protection −
5. Discuss about the functionality of system boot with respect to operating system. (APR/MAY
2015)
The BIOS, operating system and hardware components of a computer system should all be
working correctly for it to boot. If any of these elements fail, it leads to a failed boot sequence.
System Boot Process
The following diagram demonstrates the steps involved in a system boot process:
Here are the steps:
The CPU initializes itself after the power in the computer is first turned on. This is done by
triggering a series of clock ticks that are generated by the system clock.
After this, the CPU looks for the system’s ROM BIOS to obtain the first instruction in the
start-up program. This first instruction is stored in the ROM BIOS and it instructs the system
to run POST (Power on Self Test) in a memory address that is predetermined.
POST first checks the BIOS chip and then the CMOS RAM. If there is no battery failure
detected by POST, then it continues to initialize the CPU.
POST also checks the hardware devices, secondary storage devices such as hard drives,
ports etc. And other hardware devices such as the mouse and keyboard. This is done to make
sure they are working properly.
After POST makes sure that all the components are working properly, then the BIOS finds
an operating system to load.
In most computer systems, the operating system is loaded from the hard drive (typically the C drive) into memory.
The CMOS chip typically tells the BIOS where the operating system is found.
The order of the different drives that CMOS looks at while finding the operating system is
known as the boot sequence. This sequence can be changed by changing the CMOS setup.
After finding the appropriate boot drive, the BIOS first finds the boot record which tells it to
find the beginning of the operating system.
After the initialization of the operating system, the BIOS copies the files into the memory.
Then the operating system controls the boot process.
In the end, the operating system does a final inventory of the system memory and loads the
device drivers needed to control the peripheral devices.
The users can access the system applications to perform various tasks.
Without the system boot process, the computer would have to load all the software
components into memory, including the ones not frequently required. With the system boot, only the software
components that are actually required are loaded, and the extraneous components are left out.
This frees up a lot of space in the memory and consequently saves a lot of
time.
6. Sketch the structure of Direct Memory Access in detail. (APR/MAY 2017, APR/MAY 2015)
For the execution of a computer program, it requires the synchronous working of more than one
component of a computer. For example, Processors – providing necessary control information,
addresses…etc, buses – to transfer information and data to and from memory to I/O devices…etc.
The interesting factor of the system would be the way it handles the transfer of information among
processor, memory and I/O devices. Usually, processors control all the process of transferring data,
right from initiating the transfer to the storage of data at the destination. This adds load on the
processor and most of the time it stays in the idle state, thus decreasing the efficiency of the
system. To speed up the transfer of data between I/O devices and memory, DMA controller acts as
station master. DMA controller transfers data with minimal intervention of the processor.
DMA Controller:
The term DMA stands for direct memory access. The hardware device used for direct memory
access is called the DMA controller. DMA controller is a control unit, part of I/O device’s interface
circuit, which can transfer blocks of data between I/O devices and main memory with minimal
intervention from the processor.
DMA Controller Diagram in Computer Architecture
DMA controller provides an interface between the bus and the input-output devices. Although it
transfers data without intervention of processor, it is controlled by the processor. The processor
initiates the DMA controller by sending the starting address, Number of words in the data block and
direction of transfer of data .i.e. from I/O devices to the memory or from main memory to I/O
devices. More than one external device can be connected to the DMA controller.
DMA controller contains an address unit, for generating addresses and selecting I/O device
for transfer. It also contains the control unit and data count for keeping counts of the number
of blocks transferred and indicating the direction of transfer of data. When the transfer is
completed, DMA informs the processor by raising an interrupt. The typical block diagram of
the DMA controller is shown in the figure below.
Working of DMA Controller
DMA controller has to share the bus with the processor to make the data transfer. The device that
holds the bus at a given time is called bus master. When a transfer from I/O device to the memory or
vice versa has to be made, the processor stops the execution of the current program, saves its state
(program counter and registers) on the stack, and then sends a DMA select signal to the DMA controller over the
address bus.
If the DMA controller is free, it requests the control of bus from the processor by raising the bus
request signal. Processor grants the bus to the controller by raising the bus grant signal, now DMA
controller is the bus master. The processor initiates the DMA controller by sending the memory
addresses, number of blocks of data to be transferred and direction of data transfer. After assigning
the data transfer task to the DMA controller, instead of waiting idly till completion of data
transfer, the processor resumes the execution of the program after retrieving instructions from the
stack.
7. Describe the differences between symmetric and asymmetric multiprocessing. What are three
advantages andone disadvantage of multiprocessor systems? (MAY/JUNE 2016).
Symmetric Multiprocessing (SMP) system: each processor runs an identical copy of the OS,
and the processors communicate with each other as needed. Example: all modern operating systems (Windows
NT, UNIX, Linux, Windows 7/10).
Asymmetric Multiprocessing (AMP) system: each processor is assigned a specific task; a master processor
controls the system and schedules work for the slave processors.
Advantages:
1. Increased throughput
2. Economy of scale
3. Increased reliability
Disadvantage:
Multiprocessor systems are more complex in both hardware and software, since the processors share memory
and I/O devices and their activities must be carefully synchronized.
Differences between symmetric and asymmetric multiprocessing:
Basic: In SMP, each processor runs the tasks of the operating system; in AMP, only the master processor runs the operating system tasks.
Process: In SMP, processes are taken from a common ready queue (or each processor has its own private queue); in AMP, the master processor assigns processes to the slave processors.
Communication: In SMP, all processors communicate with one another through shared memory; in AMP, the processors need not communicate, since they are controlled by the master processor.
Complexity: SMP is harder to design, since all the processors need to be synchronized to maintain load balance; AMP is simpler.
PART – A
1. Define Process?
A Process can be thought of as a program in execution. A process will need certain
resources such as CPU time, memory, files & I/O devices to accomplish its task.
2. Draw & briefly explain the process states? (Nov/Dec 2017)
PART-B
1. Consider the following set of processes, with the length of the CPU – burst time in
given ms:
Process Burst time (B.T) Arrival time(A.T)
P1 8 0.00
P2 4 1.001
P3 9 2.001
P4 5 3.001
P5 3 4.001
Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, Priority
and RR (quantum=2) scheduling. Also calculate waiting time and turnaround time for each
scheduling algorithms. (APRIL/MAY 2017)
Solution:
FCFS
P1 P2 P3 P4 P5
0 8 12 21 26 29
T.A.T-Turnaround Time
A.T-Arrival Time
W.T-Waiting Time
SJF
Gantt Chart
P1 P2 P5 P4 P1 P3
0 1.001 5.001 8.001 13.001 20 29
P1 0.00 8 20 20 12
P5 4.001 3 8.001 4 1
Where,
T.A.T=C.T-A.T
W.T=T.A.T-B.T
T.A.T-Turnaround Time
A.T-Arrival Time
B.T-Burst Time
PRIORITY:
P1 P2 P3 P4 P5
0 8 12 21 26 29
T.A.T=C.T-A.T
W.T=T.A.T-B.T
ROUND ROBIN
Quantum -2
P1 P2 P3 P4 P5 P1 P2 P3 P4 P5 P1 P3 P4 P1 P3 P3
0 2 4 6 8 10 12 14 16 18 19 21 23 24 26 28 29
P1 8 26 26 18
P2 4 14 12.99 8.99
P3 9 29 26.99 17.99
P4 5 24 20.99 15.99
P5 3 19 14.99 11.99
Where,
T.A.T=C.T-A.T
W.T=T.A.T-B.T
2. What is a race condition? Explain how a critical section avoids this condition. What
are the properties which a data item should possess to implement a critical section?
Describe a solution to the Dining philosopher problem so that to races arise.
(NOV/DEC 2019, APRIL/MAY 2017)
Race Conditions
A race condition is an undesirable situation that occurs when a device or system attempts to
perform two or more operations at the same time, but because of the nature of the device or system,
the operations must be done in the proper sequence to be done correctly.
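A minimal C sketch of a race condition, assuming POSIX threads (compile with -pthread): two threads increment a shared counter without synchronization, so the final value is frequently less than 2,000,000.
#include <pthread.h>
#include <stdio.h>

long counter = 0;                         /* shared data */

void *worker(void *arg)
{
    (void)arg;                            /* unused */
    for (int i = 0; i < 1000000; i++)
        counter++;                        /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* frequently not 2000000 */
    return 0;
}
Putting the increment inside a critical section (for example, guarding it with a mutex or semaphore) removes the race.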
Solution:
From the problem statement, it is clear that a philosopher can think for an indefinite amount
of time. But when a philosopher starts eating, he has to stop at some point of time. The philosopher
is in an endless cycle of thinking and eating.
An array of five semaphores, stick[5], for each of the five chopsticks.
The code for each philosopher looks like:
while(TRUE) {
wait(stick[i]);
wait(stick[(i+1) % 5]); // mod is used because if i=5, next
// chopstick is 1 (dining table is circular)
/* eat */
signal(stick[i]);
signal(stick[(i+1) % 5]);
/* think */
}
When a philosopher wants to eat the rice, he will wait for the chopstick at his left and picks
up that chopstick. Then he waits for the right chopstick to be available, and then picks it too. After
eating, he puts both the chopsticks down.
But if all five philosophers are hungry simultaneously, and each of them pickup one chopstick, then
a deadlock situation occurs because they will be waiting for another chopstick forever. The possible
solutions for this are:
A philosopher must be allowed to pick up the chopsticks only if both the left and right
chopsticks are available.
Allow only four philosophers to sit at the table. That way, if all the four philosophers pick up four
chopsticks, there will be one chopstick left on the table. So, one philosopher can start eating and
eventually, two chopsticks will be available. In this way, deadlocks can be avoided.
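A sketch of the second idea above, in the same pseudocode style, using an additional counting semaphore (the name room is illustrative) so that at most four philosophers compete for chopsticks at a time:
semaphore stick[5];            // one semaphore per chopstick, each initialized to 1
semaphore room = 4;            // at most four philosophers may try to eat at once

while (TRUE) {
    wait(room);                    // sit at the table only if fewer than four are seated
    wait(stick[i]);                // pick up left chopstick
    wait(stick[(i + 1) % 5]);      // pick up right chopstick
    /* eat */
    signal(stick[(i + 1) % 5]);
    signal(stick[i]);
    signal(room);                  // leave the table
    /* think */
}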
a) FCFS
Gantt Chart
P1 P2 P3
0 8 12 13
T.A.T=C.T-A.T
A.T-Arrival Time
B.T-Burst Time
Average Turnaround Time=(8+11.6+12) / 3=31.6 / 3
=10.53
b) SJF non-preemptive
Gantt Chart
P1 P3 P2
0 8 9 13
T.A.T=C.T-A.T
T.A.T-Turnaround Time
A.T-Arrival Time
B.T-Burst Time
Average Turnaround Time=(8+8+12.6) / 3=28.6 / 3=9.53
c) Preemptive SJF
Gantt Chart
P1 P2 P3 P2 P1
0 0.4 1 2 5.4 13
T.A.T=C.T-A.T
T.A.T-Turnaround Time
A.T-Arrival Time
B.T-Burst Time
Average Turnaround Time=(13+5+1) / 3=19 / 3=6.33
3. Explain the FCFS, preemptive and non-preemptive versions of Shortest-Job First and
Round Robin (time slice = 2) scheduling algorithms with Gantt charts for the four Processes
given. Compare their average turnaround and waiting time. (NOV/DEC 2012)
Refer Notes(unit-2)
Scheduling Algorithms:
The following subsections will explain several common scheduling strategies, looking at only a
single CPU burst each for a small number of processes. Obviously real systems have to deal with a
lot more simultaneous processes executing their CPU-I/O burst cycles.
First-Come First-Served (FCFS) Scheduling:
FCFS is very simple - Just a FIFO queue, like customers waiting in line at the bank or the
post office or at a copying machine.
Unfortunately, however, FCFS can yield some very long average wait times, particularly if
the first process to get there takes a long time. For example, consider the following three
processes:
P1 24
P2 3
P3 3
In the first Gantt chart below, process P1 arrives first. The average waiting time for the three
processes is ( 0 + 24 + 27 ) / 3 = 17.0 ms.
In the second Gantt chart below, the same three processes have an average wait time of ( 0 +
3 + 6 ) / 3 = 3.0 ms. The total run time for the three bursts is the same, but in the second case
two of the three finish much quicker, and the other process is only delayed by a short
amount.
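A small C sketch (assuming, as in the example above, that all processes arrive at time 0) that computes the FCFS average waiting time for a given arrival order:
#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};           /* CPU bursts of P1, P2, P3 in arrival order */
    int n = 3, waited = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += waited;           /* each process waits for all earlier bursts */
        waited += burst[i];
    }
    printf("Average waiting time = %.1f ms\n", (double)total_wait / n);   /* prints 17.0 */
    return 0;
}
Reordering the array to {3, 3, 24} reproduces the 3.0 ms average of the second chart.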
FCFS can also block the system in a busy dynamic system in another way, known as
the convoy effect. When one CPU intensive process blocks the CPU, a number of I/O
intensive processes can get backed up behind it, leaving the I/O devices idle. When the CPU
hog finally relinquishes the CPU, then the I/O processes pass through the CPU quickly,
leaving the CPU idle while everyone queues up for I/O, and then the cycle repeats itself
when the CPU intensive process gets back to the ready queue.
Shortest-Job-First (SJF) Scheduling:
The idea behind the SJF algorithm is to pick the quickest, smallest job that needs to be
done, get it out of the way first, and then pick the next smallest job to do next.
( Technically this algorithm picks a process based on the next shortest CPU burst, not the
overall process time. )
For example, the Gantt chart below is based upon the following CPU burst times, ( and the
assumption that all jobs arrive at the same time. )
P1 6
P2 8
P3 7
P4 3
In the case above the average wait time is ( 0 + 3 + 9 + 16 ) / 4 = 7.0 ms, ( as opposed to
10.25 ms for FCFS for the same processes. )
SJF can be proven to be an optimal scheduling algorithm (it gives the minimum average waiting time), but it suffers from one important
problem: How do you know how long the next CPU burst is going to be?
o For long-term batch jobs this can be done based upon the limits that users set for
their jobs when they submit them, which encourages them to set low limits, but risks
their having to re-submit the job if they set the limit too low. However that does not
work for short-term CPU scheduling on an interactive system.
o Another option would be to statistically measure the run time characteristics of jobs,
particularly if the same tasks are run repeatedly and predictably. But once again that
really isn't a viable option for short term CPU scheduling in the real world.
o A more practical approach is to predict the length of the next burst, based on some
historical measurement of recent burst times for this process. One simple, fast, and
relatively accurate method is the exponential average, which can be defined as
follows. ( The book uses tau and t for their variables, but those are hard to distinguish
from one another and don't work well in HTML. )
o In this scheme the previous estimate contains the history of all previous times, and
alpha serves as a weighting factor for the relative importance of recent data versus
past history. If alpha is 1.0, then past history is ignored, and we assume the next burst
will be the same length as the last burst. If alpha is 0.0, then all measured burst times
are ignored, and we just assume a constant burst time. Most commonly alpha is set at
0.5, as illustrated in Figure a.
Figure a
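The exponential average referred to above is the standard formula
    tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
where t(n) is the measured length of the nth CPU burst, tau(n) is the previous prediction, and alpha is a weighting factor with 0 <= alpha <= 1.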
SJF can be either preemptive or non-preemptive. Preemption occurs when a new process
arrives in the ready queue that has a predicted burst time shorter than the time remaining in
the process whose burst is currently on the CPU. Preemptive SJF is sometimes referred to
as shortest remaining time first scheduling.
For example, the following Gantt chart is based upon the following data:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
The average wait time in this case is ( ( 5 - 3 ) + ( 10 - 1 ) + ( 17 - 2 ) ) / 4 = 26 / 4 = 6.5 ms. (
As opposed to 7.75 ms for non-preemptive SJF or 8.75 for FCFS. )
Priority Scheduling:
Priority scheduling is a more general case of SJF, in which each job is assigned a priority
and the job with the highest priority gets scheduled first. ( SJF uses the inverse of the next
expected burst time as its priority - The smaller the expected burst, the higher the priority. )
Note that in practice, priorities are implemented using integers within a fixed range, but
there is no agreed-upon convention as to whether "high" priorities use large numbers or
small numbers. This book uses low number for high priorities, with 0 being the highest
possible priority.
For example, the following Gantt chart is based upon these process burst times and
priorities, and yields an average waiting time of 8.2 ms:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Priorities can be assigned either internally or externally. Internal priorities are assigned by
the OS using criteria such as average burst time, ratio of CPU to I/O activity, system
resource use, and other factors available to the kernel. External priorities are assigned by
users, based on the importance of the job, fees paid, politics, etc.
Priority scheduling can be either preemptive or non-preemptive.
Priority scheduling can suffer from a major problem known as indefinite blocking,
or starvation, in which a low-priority task can wait forever because there are always some
other jobs around that have higher priority.
o If this problem is allowed to occur, then processes will either run eventually when
the system load lightens ( at say 2:00 a.m. ), or will eventually get lost when the
system is shut down or crashes. ( There are rumors of jobs that have been stuck for
years. )
o One common solution to this problem is aging, in which priorities of jobs increase
the longer they wait. Under this scheme a low-priority job will eventually get its
priority raised high enough that it gets run.
Round Robin Scheduling:
Round robin scheduling is similar to FCFS scheduling, except that CPU bursts are assigned
with limits called time quantum.
When a process is given the CPU, a timer is set for whatever value has been set for a time
quantum.
o If the process finishes its burst before the time quantum timer expires, then it is
swapped out of the CPU just like the normal FCFS algorithm.
o If the timer goes off first, then the process is swapped out of the CPU and moved to
the back end of the ready queue.
The ready queue is maintained as a circular queue, so when all processes have had a turn,
then the scheduler gives the first process another turn, and so on.
RR scheduling can give the effect of all processes sharing the CPU equally, although the
average wait time can be longer than with other scheduling algorithms. In the following
example the average wait time is 5.66 ms.
P1 24
P2 3
P3 3
The performance of RR is sensitive to the time quantum selected. If the quantum is large
enough, then RR reduces to the FCFS algorithm; If it is very small, then each process gets
1/nth of the processor time and they all share the CPU equally.
BUT, a real system invokes overhead for every context switch, and the smaller the time
quantum the more context switches there are. Most modern systems use time quantum
between 10 and 100 milliseconds, and context switch times on the order of 10 microseconds,
so the overhead is small relative to the time quantum.
Turnaround time also varies with quantum time, in a non-apparent manner.
In general, turnaround time is minimized if most processes finish their next cpu burst within
one time quantum. For example, with three processes of 10 ms bursts each, the average
turnaround time for 1 ms quantum is 29, and for 10 ms quantum it reduces to 20. However,
if it is made too large, then RR just degenerates to FCFS. A rule of thumb is that 80% of
CPU bursts should be smaller than the time quantum.
Multilevel Queue Scheduling:
When processes can be readily categorized, then multiple separate queues can be
established, each implementing whatever scheduling algorithm is most appropriate for that
type of job, and/or with different parametric adjustments.
Scheduling must also be done between queues, that is scheduling one queue to get time
relative to other queues. Two common options are strict priority ( no job in a lower priority
queue runs until all higher priority queues are empty ) and round-robin ( each queue gets a
time slice in turn, possibly of different sizes. )
Note that under this algorithm jobs cannot switch from queue to queue - Once they are
assigned a queue, that is their queue until they finish.
Multilevel Feedback-Queue Scheduling
Multilevel feedback queue scheduling is similar to the ordinary multilevel queue scheduling
described above, except jobs may be moved from one queue to another for a variety of
reasons:
o If the characteristics of a job change between CPU-intensive and I/O intensive, then
it may be appropriate to switch a job from one queue to another.
o Aging can also be incorporated, so that a job that has waited for a long time can get
bumped up into a higher priority queue for a while.
Multilevel feedback queue scheduling is the most flexible, because it can be tuned for any
situation. But it is also the most complex to implement because of all the adjustable
parameters. Some of the parameters which define one of these systems include:
o The number of queues.
o The scheduling algorithm for each queue.
o The methods used to upgrade or demote processes from one queue to another. (
Which may be different. )
o The method used to determine which queue a process enters initially.
5. What is critical section? Specify the requirements for a solution to critical section
problem. (MAY/JUNE 2019, NOV/DEC 2012)
The Critical-Section Problem:
There are n processes that are competing to use some shared data
Each process has a code segment, called critical section, in which the shared data is
accessed.
Problem – ensure that when one process is executing in its critical section, no other process
is allowed to execute in its critical section.
Requirements to be satisfied for a Solution to the Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter
their critical section, then the selection of the processes that will enter the critical section next cannot be
postponed indefinitely.
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to
enter their critical sections after a process has made a request to enter its critical section and before
that request is granted.
do {
    entry section
        critical section
    exit section
        remainder section
} while (true);
Two general approaches are used to handle critical sections in operating systems:
preemptive kernels and nonpreemptive kernels.
A preemptive kernel allows a process to be preempted while it is running in kernel mode.
A non-preemptive kernel does not allow a process running in kernel mode to be
preempted; a kernel-mode process will run until it exits kernel mode, blocks, or
voluntarily yields control of the CPU.
initialization code
To allow a process to wait within the monitor, a condition variable must be declared, as:
condition x, y;
8. Consider the following system snapshot using data structures in the banker’s
algorithm, with resources A, B, C and D and process P0 to P4.
        Max        Allocation   Need       Available
        A B C D    A B C D      A B C D    A B C D
P0      6 0 1 2    4 0 0 1                 3 2 1 1
P1      1 7 5 0    1 1 0 0
P2      2 3 5 6    1 2 5 4
P3      1 6 5 3    0 6 3 3
P4      1 6 5 6    0 2 1 2
Solution:
If a request from process P4 arrives for additional resources of (1, 2, 0, 0), and if this request is
granted, the new system state would be tabulated as follows.
• Simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
• The deadlock-avoidance algorithm dynamically examines the resource-allocation
state to ensure that there can never be a circular-wait condition.
• Resource-allocation state is defined by the number of available and allocated
resources, and the maximum demands of the processes.
Safe State
• When a process requests an available resource, system must decide if immediate allocation
leaves the system in a safe state.
• System is in safe state if there exists a sequence <P1, P2, …, Pn> of ALL the
processes in the system such that for each Pi, the resources that Pi can still request can be
satisfied by currently available resources + resources held by all the Pj, with j < i.
• That is:
– If Pi resource needs are not immediately available, then Pi can wait until all Pj have
finished.
– When Pj is finished, Pi can obtain needed resources, execute, return allocated
resources, and terminate.
Avoidance algorithms
• Single instance of a resource type. Use a resource-allocation graph
Banker’s Algorithm
• Multiple instances.
• Each process must a priori claim maximum use.
• When a process requests a resource it may have to wait.
• When a process gets all its resources it must return them in a finite amount of time.
• Let n = number of processes, and m = number of resources types.
• Available: Vector of length m. If available [j] = k, there are k instances of resource
type Rj available.
• Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of
resource type Rj.
• Allocation: n x m matrix. If Allocation[i,j] = k then Pi is currently allocated k instances of
Rj.
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its
task.
Example (resource types A, B, C):
Process   Allocation   Max
P1        2 0 0        3 2 2
P2        3 0 2        9 0 2
P3        2 1 1        2 2 2
P4        0 0 2        4 3 3
• The content of the matrix Need is defined to be Max – Allocation.
Need
ABC
P0 743
P1 122
P2 600
P3 011
P4 431
• The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety criteria.
Need[i,j] = Max[i,j] – Allocation[i,j].
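A compact C sketch of the safety algorithm for the example above. P0's allocation (0 1 0) and the Available vector (3 3 2) are not shown in the text, so the values used here are assumptions taken from the standard textbook version of this example:
#include <stdbool.h>
#include <stdio.h>

#define N 5   /* processes */
#define M 3   /* resource types A, B, C */

int main(void)
{
    int available[M]     = {3, 3, 2};                                     /* assumed */
    int allocation[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};     /* P0 assumed */
    int need[N][M]       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    bool finished[N] = {false};
    int safe_seq[N], count = 0;

    while (count < N) {
        bool progress = false;
        for (int p = 0; p < N; p++) {
            if (finished[p]) continue;
            bool ok = true;
            for (int r = 0; r < M; r++)
                if (need[p][r] > available[r]) { ok = false; break; }
            if (ok) {                            /* Pi can finish: release its resources */
                for (int r = 0; r < M; r++)
                    available[r] += allocation[p][r];
                finished[p] = true;
                safe_seq[count++] = p;
                progress = true;
            }
        }
        if (!progress) { printf("System is NOT in a safe state\n"); return 0; }
    }
    printf("Safe sequence:");
    for (int i = 0; i < N; i++) printf(" P%d", safe_seq[i]);
    printf("\n");          /* one valid order; <P1, P3, P4, P2, P0> from the text is another */
    return 0;
}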
When deadlock detected, then our system stops working, and after the recovery of the deadlock, our
system start working again.
Therefore, after the detection of deadlock, a method/way must require to recover that deadlock to
run the system again. The method/way is called as deadlock recovery.
There are three ways of deadlock recovery: recovery through preemption, recovery through rollback,
and recovery through killing processes. Let's discuss each of them briefly, one by one.
Deadlock Recovery through Preemption
Preemption is the ability to take a resource away from a process, have another process use it, and then give it back
without the process noticing. It is highly dependent on the nature of the resource.
Deadlock Recovery through Rollback
In this case of deadlock recovery through rollback, whenever a deadlock is detected, it is easy to see
which resources are needed.
To do the recovery of deadlock, a process that owns a needed resource is rolled back to a point in
time before it acquired some other resource just by starting one of its earlier checkpoints.
Deadlock Recovery through Killing Processes
This method of deadlock recovery through killing processes is the simplest way of deadlock
recovery.
Sometimes it is best to kill a process that can be rerun from the beginning with no ill effects.
11. Consider the following set of processes, with the length of the CPU – burst time given
in Milliseconds:
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2
The processes arrive in the order P1, P2, P3, P4, P5, all at time 0.
• Draw 4 Gantt charts illustrating the execution of these processes
using FCFS, SJF Priority and RR (Time Slice = 1) scheduling
• What is the turnaround time of each process for each of the scheduling?
• Calculate the waiting time for each of the process (APRIL/MAY 2018,
MAY/JUNE 2012, NOV/DEC 2015)
13. Discuss in detail the critical section problem and also write the algorithm for Readers-
Writers Problem with semaphores (NOV/DEC 2013)
There are n processes that are competing to use some shared data
Each process has a code segment, called critical section, in which the shared data is
accessed.
Problem – ensure that when one process is executing in its critical section, no other process
is allowed to execute in its critical section.
READERS-WRITERS PROBLEM:
The readers-writers problem relates to an object such as a file that is shared between multiple
processes. Some of these processes are readers i.e. they only want to read the data from the object
and some of the processes are writers i.e. they want to write into the object.
The readers-writers problem is used to manage synchronization so that there are no problems with
the object data. For example - If two readers access the object at the same time there is no problem.
However if two writers or a reader and writer access the object at the same time, there may be
problems.
To solve this situation, a writer should get exclusive access to an object i.e. when a writer is
accessing the object, no reader or writer may access it. However, multiple readers can access the
object at the same time.
This can be implemented using semaphores. The codes for the reader and writer process in the
reader-writer problem are given as follows:
Reader Process
The code that defines the reader process is given below:
wait(mutex);
rc++;
if (rc == 1)
    wait(wrt);
signal(mutex);
/* reading is performed */
wait(mutex);
rc--;
if (rc == 0)
    signal(wrt);
signal(mutex);
In the above code, mutex and wrt are semaphores that are initialized to 1. Also, rc is a variable that
is initialized to 0. The mutex semaphore ensures mutual exclusion and wrt handles the writing
mechanism and is common to the reader and writer process code.
The variable rc denotes the number of readers accessing the object. As soon as rc becomes 1, wait
operation is used on wrt. This means that a writer cannot access the object anymore. After the read
operation is done, rc is decremented. When rc becomes 0, signal operation is used on wrt. So a
writer can access the object now.
Writer Process
The code that defines the writer process is given below:
wait(wrt);
/* writing is performed */
signal(wrt);
If a writer wants to access the object, wait operation is performed on wrt. After that no other writer
can access the object. When a writer is done writing into the object, signal operation is performed
on wrt.
Deadlock Detection
• Allow system to enter deadlock state
• Detection algorithm
• Recovery scheme
Single Instance of Each Resource Type
• Maintain wait-for graph
Nodes are processes.
– Pi → Pj if Pi is waiting for Pj.
• Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle,
there exists a deadlock.
• An algorithm to detect a cycle in a graph requires an order of n^2 operations, where n is
the number of vertices in the graph.
15. Show how wait() and signal() semaphore operations could be implemented in
multiprocessor environments using the test and set instruction. The solution should
exhibit minimal busy waiting. Develop pseudo code for implementing the operations.
(APR/MAY 2015)
Semaphores
A semaphore S is an integer variable that is accessed only through the two atomic operations wait() and signal():
wait(s)
{
    while (s <= 0)
        ;           /* busy wait */
    s--;
}
signal(s)
{
    s++;
}
Mutual exclusion using a semaphore mutex (initialized to 1):
do {
    wait(mutex);
        /* critical section */
    signal(mutex);
        /* remainder section */
} while (1);
Semaphore Implementation
The semaphore discussed so far requires a busy waiting. That is if a process is in critical-
section, the other process that tries to enter its critical-section must loop continuously in
the entry code.
To overcome the busy waiting problem, the definition of the semaphore operations wait
and signal should be modified.
o When a process executes the wait operation and finds that the semaphore value
is not positive, the process can block itself. The block operation places the process
into a waiting queue associated with the semaphore.
o A process that is blocked waiting on a semaphore should be restarted when some
other process executes a signal operation. The blocked process should be
restarted by a wakeup operation which put that process into ready queue.
To implement the semaphore, we define a semaphore as a record as follows:
typedef struct {
int value;
struct process *L;
} semaphore;
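A possible pseudo-C sketch for the wait() and signal() operations asked for in question 15: a test_and_set() spinlock protects only the short update of the semaphore record, so busy waiting is limited to that brief section, while a process that must wait on the semaphore itself is blocked and later woken up. Here test_and_set(), block(), wakeup() and the list helpers are assumed primitives, not library calls:
typedef struct {
    int value;
    struct process *L;        /* queue of blocked processes, as defined above */
    int lock;                 /* spinlock word manipulated with test_and_set */
} semaphore;

int test_and_set(int *target);       /* assumed atomic hardware instruction */

void acquire(int *lock) { while (test_and_set(lock)) ; }   /* short busy wait only */
void release(int *lock) { *lock = 0; }

void wait(semaphore *S)
{
    acquire(&S->lock);
    S->value--;
    if (S->value < 0) {
        add_to_list(S->L, current_process());   /* assumed helpers */
        release(&S->lock);
        block();                                /* sleep until woken up */
    } else {
        release(&S->lock);
    }
}

void signal(semaphore *S)
{
    acquire(&S->lock);
    S->value++;
    if (S->value <= 0) {
        struct process *P = remove_from_list(S->L);   /* assumed helper */
        wakeup(P);                                    /* move P to the ready queue */
    }
    release(&S->lock);
}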
P0 P1
Suppose that P0 executes wait(S), then P1 executes wait(Q). When P0 executes
wait (Q), it must wait until P1 executes signal(Q).Similarly when P1 executes
wait(S), it must wait until P0 executes signal(S). Since these signal operations
cannot be executed, P0 & P1 are deadlocked.
Another problem related to deadlock is indefinite blocking or starvation, a
situation where a process wait indefinitely within the semaphore. Indefinite
blocking may occur if we add or remove processes from the list associated with
a semaphore in LIFO order.
Types of Semaphores
Multithreading Models
Multithreading allows the execution of multiple parts of a program at the same time. These parts are
known as threads and are lightweight processes available within the process. Therefore,
multithreading leads to maximum utilization of the CPU by multitasking.
The main models for multithreading are one to one model, many to one model and many to many
model. Details about these are given as follows:
One to One Model
The one to one model maps each of the user threads to a kernel thread. This means that many
threads can run in parallel on multiprocessors and other threads can run when one thread makes a
blocking system call.
A disadvantage of the one to one model is that the creation of a user thread requires a corresponding
kernel thread. Since a lot of kernel threads burden the system, there is restriction on the number of
threads in the system.
A diagram that demonstrates the one to one model is given as follows:
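For instance, a minimal POSIX-threads sketch; on Linux, pthreads follow the one to one model, so each pthread_create() below produces a separately schedulable kernel thread (compile with -pthread):
#include <pthread.h>
#include <stdio.h>

void *say_hello(void *arg)
{
    printf("hello from thread %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    int id[2] = {1, 2};
    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, say_hello, &id[i]);   /* one kernel thread each */
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    return 0;
}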
Internal fragmentation is the area occupied by a process but cannot be used by the
process. This space is unusable by the system until the process release the space.
External fragmentation exists when total free memory is enough for the new
process but it's not contiguous and can't satisfy the request. Storage is fragmented
into small holes.
1 20 20 20
2 18 18 15
3 15 16 11
4 10 14 8
5 8 10 7
6 7 10 7
7 7 7 7
31. Differentiate between Global and Local page replacement algorithms. (NOV/DEC 2012)(May/June
2013, 15)
PART-B
1. Describe the hierarchical paging technique for structuring page tables. (8)
(MAY/JUNE 2013)
Multilevel paging is a paging scheme that consists of two or more levels of page tables in
a hierarchical manner. It is also known as hierarchical paging. The entries of the level 1 page
table are pointers to a level 2 page table, the entries of the level 2 page tables are pointers to a
level 3 page table, and so on. The entries of the last level page table store the actual frame
information. Level 1 contains a single page table, and the address of that table is stored in the PTBR
(Page Table Base Register).
Virtual address:
In multilevel paging whatever may be levels of paging all the page tables will be stored in
main memory. So it requires more than one memory access to get the physical address of page
frame. One access for each level needed. Each page table entry except the last level page table entry
contains base address of the next level page table.
If page table size > desired size then create 1 more level.
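A small C sketch of how a 32-bit logical address might be split for two levels of paging; the 10/10/12 split assumed here corresponds to 4 KB pages, and the sample address is arbitrary:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t vaddr = 0x12345678;             /* example logical address */
    uint32_t p1 = (vaddr >> 22) & 0x3FF;     /* outer page table index (10 bits) */
    uint32_t p2 = (vaddr >> 12) & 0x3FF;     /* inner page table index (10 bits) */
    uint32_t d  = vaddr & 0xFFF;             /* page offset (12 bits) */
    printf("p1=%u p2=%u d=%u\n", p1, p2, d);
    return 0;
}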
Disadvantage:
Extra memory references to access address translation tables can slow programs down by a
factor of two or more. Use translation look aside buffer (TLB) to speed up address
translation by storing page table entries.
2. What is the cause for thrashing? How does the system detect thrashing? Once it
detects, what can the system do to eliminate this problem? (NOV/DEC 2017,
MAY/JUNE 2009)
Segmentation:
Segmentation is a memory management technique in which each job is divided into several
segments of different sizes, one for each module that contains pieces that perform related functions. Each
segment is actually a different logical address space of the program.
When a process is to be executed, its corresponding segments are loaded into non-contiguous memory,
though every segment is loaded into a contiguous block of available memory.
Segmentation memory management works very similar to paging but here segments are of variable-length
where as in paging pages are of fixed size.
A program segment contains the program's main function, utility functions, data structures, and so on. The
operating system maintains a segment map table for every process and a list of free memory blocks
along with segment numbers, their size and corresponding memory locations in main memory. For each
segment, the table stores the starting address of the segment and the length of the segment. A reference to
a memory location includes a value that identifies a segment and an offset.
Following steps are followed to translate logical address into physical address-
Step-01:
CPU generates a logical address consisting of three parts-
1. Segment Number
2. Page Number
3. Page Offset
Segment Number specifies the specific segment from which the CPU wants to read the data.
Page Number specifies the specific page of that segment from which CPU wants to read the data.
Page Offset specifies the specific word on that page that CPU wants to read.
Step-02:
For the generated segment number, corresponding entry is located in the segment table.
Segment table provides the frame number of the frame storing the page table of the referred
segment.
The frame containing the page table is located.
Step-03:
For the generated page number, corresponding entry is located in the page table.
Page table provides the frame number of the frame storing the required page of the referred
segment.
The frame containing the required page is located.
Step-04:
The frame number combined with the page offset forms the required physical address.
For the generated page offset, corresponding word is located in the page and read.
The following diagram illustrates the above steps of translating logical address into physical
address-
Advantages-
The advantages of segmented paging are-
Segment table contains only one entry corresponding to each segment.
It reduces memory usage.
The size of Page Table is limited by the segment size.
It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of segmented paging are-
Segmented paging suffers from internal fragmentation.
The complexity level is much higher as compared to paging.
5. Explain the segmentation with paging implemented in OS/2 32-bit IBM system. Describe the
following algorithms: (APRIL/MAY2010)(April/May2019)
a. First fit
b. Best Fit
c. Worst Fit
s g p
13 1 2
Where s designates the segment number, g indicates whether the segment is in the GDT
or LDT, and p deals with protection.
The base and limit information about the segment in question are used to generate a
linear-address.
First, the limit is used to check for address validity. If the address is not valid, a
memory fault is generated, resulting in a trap to the operating system. If it is valid, then
the value of the offset is added to the value of the base, resulting in a 32-bit linear
address. This address is then translated into a physical address.
The linear address is divided into a page number consisting of 20 bits, and a page offset
consisting of 12 bits. Since we page the page table, the page number is further divided
into a 10-bit page directory pointer and a 10-bit page table pointer. The logical address
is as follows.
P1 P2 D
10 10 12
To improve the efficiency of physical memory use. Intel 386 page tables can be
swapped to disk. In this case, an invalid bit is used in the page directory entry to
indicate whether the table to which the entry is pointing is in memory or on disk.
If the table is on disk, the operating system can use the other 31 bits to specify the
disk location of the table; the table then can be brought into memory on demand.
Fragmentation
External Fragmentation – This takes place when enough total memory space exists to satisfy a
request, but it is not contiguous i.e, storage is fragmented into a large number of small holes
scattered throughout the main memory.
Internal Fragmentation – Allocated memory may be slightly larger than requested memory.
6. Explain how paging supports virtual memory. With a neat diagram explain how logical
address is translated into physical address. (NOV/DEC 2012)
Virtual Memory:
It is a technique that allows the execution of processes that may not be
completely in main memory.
In practice, most real processes do not need all their pages, or at least not all at
once, for several reasons:
1. Error handling code is not needed unless that specific error occurs, some of
which are quite rare.
2. Arrays are often over-sized for worst-case scenarios, and only a small
fraction of the arrays are actually used in practice.
3. Certain features of certain programs are rarely used.
Advantages:
o Allows the program that can be larger than the physical memory.
o Separation of user logical memory from physical memory
o Allows processes to easily share files & address space.
o Allows for more efficient process creation.
Virtual memory can be implemented using
o Demand paging
o Demand segmentation
Demand Paging:
A demand paging system is quite similar to a paging system with swapping. When we
want to execute a process, we swap it into memory. Rather than swapping the entire
process into memory, however, we use a lazy swapper called pager. Lazy Swapper -Never
swaps a page into memory unless that page will be needed.
When a process is to be swapped in, the pager guesses which pages will be used before
the process is swapped out again. Instead of swapping in a whole process, the pager brings
only those necessary pages into memory. Thus, it avoids reading into memory pages that
will not be used in anyway, decreasing the swap time and the amount of physical memory
needed.
Transfer of a paged memory to contiguous disk space
Hardware support is required to distinguish between those pages that are in memory and
those pages that are on the disk using the valid-invalid bit scheme. Where valid and invalid
pages can be checked by checking the bit. Marking a page will have no effect if the
process never attempts to access the page. While the process executes and accesses pages
that are memory resident, execution proceeds normally.
Valid-Invalid bit
A valid – invalid bit is associated with each page table entry.
Valid bit represents the associated page is in memory.
In-Valid bit represents either
(a) an invalid page, or
(b) a valid page that is currently on the disk.
Page table when some pages are not in main memory
Advantages
Programs could be written for a much larger address space (virtual memory space)
than physically exists on the computer.
Because each process is only using a fraction of their total address space, there is
more memory left for other programs, improving CPU utilization and system
throughput.
Less I/O is needed for swapping processes in and out of RAM, speeding things
up.
7. Explain the principles of segmented and paging implemented in memory with a diagram.
(NOV/DEC2013)(MAY/JUNE2016)
Paging
It is a memory management scheme that permits the physical address space of a
process to be noncontiguous.
It avoids the considerable problem of fitting the varying size memory chunks onto the
backing store.
Basic Method
Divide logical memory into blocks of same size called “pages”.
Divide physical memory into fixed-sized blocks called “frames”
Page size is a power of 2, between 512 bytes and 16MB.
Address Translation Scheme
Address generated by CPU is divided into:
o Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
o Page offset (d) – combined with base address to define the physical memory
address that is sent to the memory unit
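For example, with a page size of 4 KB (4096 bytes), the logical address 8200 gives page number p = 8200 / 4096 = 2 and page offset d = 8200 mod 4096 = 8.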
Allocation
When a process arrives into the system, its size (expressed in pages) is examined.
Each page of the process needs one frame. Thus if the process requires ‘n‘ pages, at least ‘n‘
frames must be available in memory.
If ‘n‘ frames are available, they are allocated to this arriving process.
The 1st page of the process is loaded into one of the allocated frames & the frame
number is put into the page table.
Repeat the above step for the next pages & so on.
When we use a paging scheme, we have no external fragmentation: Any free frame can be
allocated to a process that needs it. However, we may have some internal fragmentation.
Frame table: It is used to determine which frames are allocated, which frames are available, how
many total frames are there, and so on.(ie) It contains all the information about the frames in the
physical memory.
The hardware implementation of the page table can be done in several ways.
The page table is implemented as a set of dedicated registers. These registers should be
built with very high-speed logic to make the paging-address translation efficient. If the
page table is very large (for example, 1 million entries), the use of fast registers to
implement the page table is not feasible.
The page table is kept in main memory. Page-table base register (PTBR) points to the
page table. In this scheme every data/instruction access requires two memory accesses.
One for the page table and one for the data / instruction
The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers (TLBs)
Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely
identifies each process to provide address-space protection for that process
If the page number is not in the TLB (known as a TLB miss), a memory reference to the page
table must be made. When the frame number is obtained, we can use it to access memory. In
addition, add the page number and frame number to the TLB, so that they will be found quickly
on the next reference. If the TLB is already full of entries, the operating system must select one
for replacement.
Hit ratio: Percentage of times that a particular page is found in the TLB.
For example, a hit ratio of 80% means that the desired page number is found in the TLB 80% of the
time.
Effective Access Time
Assume hit ratio is 80%.
If it takes 20ns to search TLB & 100ns to access memory, then the memory
access takes 120ns(TLB hit)
If we fail to find page no. in TLB (20ns), then we must 1st access memory for
page table (100ns) & then access the desired byte in memory (100ns). Therefore
Total = 20 + 100 + 100
= 220 ns(TLB miss).
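Weighting the two cases by the hit ratio gives the effective access time:
EAT = 0.80 x 120 + 0.20 x 220 = 96 + 44 = 140 ns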
Memory Protection
Memory protection implemented by associating protection bit with each frame
One bit can define a page to be read-write or read-only. Every reference to memory
goes through the page table to find the correct frame number. An attempt to write to a
read-only page causes a hardware trap to the operating system
Valid-invalid bit attached to each entry in the page table:
“valid (v)” indicates that the associated page is in the process logical address space,
and is thus a legal page
“invalid (i)” indicates that the page is not in the process logical address space
A page-table length register (PTLR) indicates the size of the page table. This value
is checked against every logical address to verify that the address is in the valid range
for the process.
Hierarchical (Two-Level) Paging
The logical address is divided into an outer page number p1, an inner page number p2 and an offset d,
where p1 is an index into the outer page table and p2 is the displacement within the page
of the inner page table. This is also known as a forward-mapped page table.
Address-Translation Scheme
Address-translation scheme for a two-level 32-bit paging architecture.
It requires more memory accesses as the number of levels is increased.
Hashed Page Table
Each entry in the hash table contains a linked list of elements that hash to the same
location.
Each entry consists of:
(a) the virtual page number,
(b) the value of the mapped page frame, and
(c) a pointer to the next element in the linked list.
Working Procedure:
o The virtual page number in the virtual address is hashed into the hash table.
o The virtual page number is compared with field (a) in the first element of the linked list.
o If there is a match, the corresponding page frame (field (b)) is used to form the
desired physical address.
o If there is no match, subsequent entries in the linked list are searched for a
matching virtual page number.
Clustered page table: It is a variation of hashed page table & is similar to hashed page table
except that each entry in the hash table refers to several pages rather than a single page.
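The lookup procedure described above can be sketched with a dictionary-of-lists standing in for the hash table; the bucket count, page numbers and frame values are assumed for illustration.

NUM_BUCKETS = 16
hash_table = [[] for _ in range(NUM_BUCKETS)]      # each bucket is a chain

def insert(vpn, frame):
    hash_table[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    # Hash the virtual page number, then walk the chain for a match.
    for entry_vpn, frame in hash_table[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None                                    # not mapped -> page fault

insert(7, 3)
insert(23, 8)          # 23 % 16 == 7, so it chains with page 7
print(lookup(23))      # 8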
Shared Pages
One advantage of paging is the possibility of sharing common code.
Shared code
One copy of read-only (reentrant) code is shared among processes (e.g., text editors,
compilers, window systems).
Shared code must appear in the same location in the logical address space of all processes.
Reentrant code (Pure code): Non-self modifying code. If the code is reentrant, then it never
changes during execution. Thus two or more processes can execute the same code at the same
time.
Segmentation:
Memory-management scheme that supports user view of memory.
A program is a collection of segments. A segment is a logical unit such as: Main program,
Procedure, Function, Method, Object, Local variables, global variables, Common block,
Stack, Symbol table, arrays
Segmentation Hardware
A logical address consists of two parts: a segment number, s, and an offset into that segment,
d. The segment number is used as an index to the segment table. The offset d of the logical
address must be between 0 and the segment limit. If it is not, we trap to the operating system
(logical addressing attempt beyond, end of segment). When an offset is legal, it is added to
the segment base to produce the address in physical memory of the desired byte. The segment
table is thus essentially an array of base-limit register pairs.
EXAMPLE:
For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to
byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
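A sketch of the limit check and base addition described above; segment 2 carries the base 4300 and limit 400 from the example, while the other entries are assumed values.

segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}   # (base, limit)

def translate(s, d):
    base, limit = segment_table[s]
    if d < 0 or d >= limit:
        raise IndexError("trap: offset beyond end of segment")
    return base + d                    # physical address of the desired byte

print(translate(2, 53))                # 4300 + 53 = 4353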
Sharing of Segments
Segments are shared when entries in the segment tables of two different processes point to the same
physical location. Sharing occurs at the segment level; for example, the read-only code of a text editor
can be placed in a shared segment, while each user keeps private data segments.
Page Replacement
If no frames are free, we could find one that is not currently being used & free it.
We can free a frame by writing its contents to swap space & changing the page table to
indicate that the page is no longer in memory.
Then we can use that freed frame to hold the page for which the process faulted.
Note:
If no frames are free, two page transfers are required & this situation effectively
doubles the page- fault service time.
10. Explain any four page replacement algorithms in detail? (NOV/DEC 2009) (NOV/DEC
2013) NOV/DEC2012) (MAY/JUNE2016)
Page Replacement
If no frames are free, we could find one that is not currently being used & free it.
We can free a frame by writing its contents to swap space & changing the page table to
indicate that the page is no longer in memory.
Then we can use that freed frame to hold the page for which the process faulted.
13. What is thrashing? Explain the working set model in detail. (MAY/JUNE 2009)
Given memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in order), how would
each of the first-fit, best-fit and worst-fit algorithms place processes of 212KB, 417KB, 12KB
and 426KB (in order)? Which algorithm makes the most efficient use of memory? (NOV/DEC
2008)
Refer Notes (Unit-3)
14. Explain in briefly and compare, fixed and dynamic memory partitioning
schemes.(NOV/DEC2012)
Contiguous Allocation
(l) Main memory is usually divided into two partitions:
o Resident operating system, usually held in low memory with interrupt vector.
o User processes then held in high memory.
(m) Single-partition allocation
o Relocation-register scheme used to protect user processes from each other, and from
changing operating-system code and data.
o Relocation register contains value of smallest physical address; limit register contains
range of logical addresses – each logical address must be less than the limit register.
Memory Protection
(n) It should consider;
Protecting the OS from user process.
Protecting user processes from one another.
(o) The above protection is provided by the relocation-register & limit-register scheme.
(p) Relocation register contains the value of the smallest physical address, i.e., the base value.
(q) Limit register contains the range of logical addresses – each logical address must be less than the
limit register.
Variable-partition method:
(t) Divide memory into variable size partitions, depending upon the size of the incoming process.
(u) When a process terminates, the partition becomes available for another process.
(v) As processes complete and leave they create holes in the main memory.
(w) Hole – block of available memory; holes of various size are scattered throughout
memory.
15. Consider the following page reference string (MAY/JUNE 2012) (APR/MAY 2015)
1,2,3,4,2,1,5,6,2,1,3,7,6,3,2,1,3,6.
How many page faults would occur for the following replacement algorithms, assuming one,
two, three and four frames? LRU replacement, FIFO replacement Optimal replacement
Refer notes .. Page No.409-420
16. When do page faults occur? Consider the reference string: (NOV/DEC 2017)
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6.
How many page faults occur, and what is the page fault rate, for the FIFO, LRU and optimal
replacement algorithms, assuming one, two, three and four page frames?
FIFO:
Frame=3
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 4 4 4 4 6 6 6 6 3 3 3 3 2 2 2 2 6
2 2 2 2 2 1 1 1 2 2 2 2 7 7 7 7 1 1 1 1
3 3 3 3 3 5 5 5 1 1 1 1 6 6 6 6 6 3 3
Page fault = 16, Page Hit = 4, Page fault rate = 16 / 20 = 0.80
Frame=4
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 5 5 5 5 5 3 3 3 3 3 1 1 1 1
2 2 2 2 2 2 2 6 6 6 6 6 7 7 7 7 7 7 3 3
3 3 3 3 3 3 3 2 2 2 2 2 6 6 6 6 6 6 6
4 4 4 4 4 4 4 1 1 1 1 1 1 2 2 2 2 2
Page fault = 14, Page Hit = 6, Page fault rate = 14 / 20 = 0.70
Note: here the number of page faults decreases when more frames are allocated to the process (16 with
three frames versus 14 with four); for some reference strings, however, FIFO can exhibit Belady's
anomaly, where adding frames increases the number of page faults.
LRU:
Frame=3
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
0 1 1 1 4 4 4 5 5 5 1 1 1 7 7 7 2 2 2 2 2
1 2 2 2 2 2 2 6 6 6 6 3 3 3 3 3 3 3 3 3
2 3 3 3 1 1 1 2 2 2 2 2 6 6 6 1 1 1 6
No. of references = 20
Page fault = 15
Page Hit = 5
Page fault rate = 15 / 20 = 0.75
OPTIMAL:
Frame=3
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
0 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 6
1 2 2 2 2 2 2 2 2 2 2 2 7 7 7 2 2 2 2 2
2 3 4 4 4 5 6 6 6 6 6 6 6 6 6 1 1 1 1
Page fault = 11
Page Hit = 9
Page fault rate = 11 / 20 = 0.55
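The counts above can be cross-checked with a short simulation (a sketch only, assuming three frames and ties broken by position in memory):

refs = [1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6]

def simulate(refs, frames, policy):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":                    # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        if policy in ("FIFO", "LRU"):              # oldest arrival / least recently used
            victim = memory[0]
        else:                                      # Optimal: farthest next use
            future = refs[i + 1:]
            victim = max(memory, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
        memory.remove(victim)
        memory.append(page)
    return faults

for policy in ("FIFO", "LRU", "Optimal"):
    f = simulate(refs, 3, policy)
    print(policy, "faults =", f, "hits =", len(refs) - f)
# With 3 frames this prints FIFO 16 faults, LRU 15 faults, Optimal 11 faults.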
17. Explain the concept of demand paging in detail with neat diagram (MAY/JUNE 2014)
(NOV/DEC 2013)
Demand Paging
A demand paging system is quite similar to a paging system with swapping. When we want to
execute a process, we swap it into memory. Rather than swapping the entire process into
memory, however, we use a lazy swapper called pager. Lazy Swapper - Never swaps a page
into memory unless that page will be needed
When a process is to be swapped in, the pager guesses which pages will be used before the
process is swapped out again. Instead of swapping in a whole process, the pager brings only
those necessary pages into memory. Thus, it avoids reading into memory pages that will not be
used anyway, decreasing the swap time and the amount of physical memory needed.
Hardware support is required to distinguish between the pages that are in memory and the pages
that are on the disk, using the valid-invalid bit scheme: whether a page is valid or invalid is determined
by checking this bit. Marking a page invalid has no effect if the process never attempts
to access that page. While the process executes and accesses pages that are memory resident,
execution proceeds normally.
Valid-Invalid bit
A valid-invalid bit is associated with each page-table entry.
Valid bit indicates that the associated page is in memory.
Invalid bit indicates either
(x) a page that is not in the process's logical address space, or
(y) a legal page that is currently on the disk.
Page table when some pages are not in main memory
Advantages
(a) Programs could be written for a much larger address space (virtual memory space) than
physically exists on the computer.
(b) Because each process is only using a fraction of their total address space, there is more
memory left for other programs, improving CPU utilization and system throughput.
(c) Less I/O is needed for swapping processes in and out of RAM, speeding things up.
18. Why are translation look-aside buffers important? Explain the details stored in a TLB table
entry? (APRIL/MAY 2017, MAY/JUNE 2014) Refer notes ..Page No.364-377
Effect of thrashing
If any process that does not have ''enough" frames, it will quickly page-fault. At this point, it
must replace some page. However, since all its pages are in active use, it must replace a page that
will be needed again right away. Consequently, it quickly faults again, and again, and again,
replacing pages that it must bring back in immediately.
High paging activity is called thrashing. A process is thrashing if it is spending more time paging
than executing.
If a process does not have "enough" pages, the page-fault rate is very high. This leads to:
- low CPU utilization;
- the operating system thinks that it needs to increase the degree of multiprogramming;
- another process is added to the system.
When the CPU utilization is low, the OS increases the degree of multiprogramming.
If global replacement is used then as processes enter the main memory they tend to steal
frames belonging to other processes.
Eventually all processes will not have enough frames and hence the page fault rate becomes
very high.
Thus swapping in and swapping out of pages only takes place.
This is the cause of thrashing.
Solution:
First-fit: Allocate the first hole that is big enough.
Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by
size. Produces the smallest leftover hole.
Worst-fit: Allocate the largest hole; must also search entire list. Produces the largest leftover
hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
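The placement question above can be checked with a small sketch of the three policies (the hole sizes and process sizes are the ones given in the question; this is illustrative, not the prescribed answer):

def place(holes, size, policy):
    candidates = [i for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None                                   # request must wait
    if policy == "first":
        i = candidates[0]
    elif policy == "best":
        i = min(candidates, key=lambda j: holes[j])   # smallest adequate hole
    else:
        i = max(candidates, key=lambda j: holes[j])   # worst fit: largest hole
    holes[i] -= size                                  # shrink the chosen hole
    return i

for policy in ("first", "best", "worst"):
    holes = [100, 500, 200, 300, 600]                 # KB, in order
    for proc in (212, 417, 12, 426):                  # KB, in order
        print(policy, proc, "-> hole index", place(holes, proc, policy))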
Fragmentation
External Fragmentation – This takes place when enough total memory space exists to satisfy a
request, but it is not contiguous i.e., storage is fragmented into a large number of small holes scattered
throughout the main memory.
Internal Fragmentation – Allocated memory may be slightly larger than requested memory.
File permissions
File dates (create, access, write)
File owner, group, ACL
File size
File data blocks
Three advantages:-
a. Bugs are less likely to cause an operating system crash.
b. Performance can be improved by utilizing dedicated hardware and hard-coded algorithms.
c. The kernel is simplified by moving algorithms out of it.
Three disadvantages:
a. Bugs are harder to fix - a new firmware version or new hardware is needed.
b. Improving the algorithms likewise requires a hardware update rather than just a kernel or
device-driver update.
c. Embedded algorithms could conflict with the application's use of the device, causing
decreased performance.
27. How free-space is managed using bit vector implementation? (May/June 2018)
The free-space list is implemented as a bit map or bit vector. Each block is represented by 1
bit. If the block is free, the bit is 1; if the block is allocated, the bit is 0.
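A sketch of the bit-vector idea, using the convention stated above (1 = free, 0 = allocated); the bitmap contents are an assumed example.

bitmap = [0, 0, 1, 0, 1, 1, 0, 1]        # bit i describes block i

def first_free_block(bitmap):
    for block, bit in enumerate(bitmap):
        if bit == 1:
            return block
    return None                          # no free block on the disk

def allocate(bitmap):
    block = first_free_block(bitmap)
    if block is not None:
        bitmap[block] = 0                # mark the block as allocated
    return block

print(allocate(bitmap))                  # 2, the first free block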
7200 rpm gives 120 rotations per second. Thus a full rotation takes 8.33 ms and the average
rotational latency (a half rotation) takes 4.167 ms.
PART-B
1. Explain the different disk scheduling algorithms with examples. (APRIL/MAY 2018,
NOV/DEC 2019, APRIL/MAY 2010, MAY/JUNE 2012, APRIL/MAY 2011, MAY/JUNE 2013,
MAY/JUNE 2014)
One of the responsibilities of the operating system is to use the hardware efficiently. For the disk
drives,
1. A fast access time and
2. High disk bandwidth.
□ The access time has two major components;
✓ The seek time is the time for the disk arm to move the heads to the cylinder containing
the desired sector.
✓ The rotational latency is the additional time waiting for the disk to rotate the desired
sector to the disk head.
□ The disk bandwidth is the total number of bytes transferred, divided by the total time between the
first request for service and the completion of the last transfer.
1. FCFS Scheduling:
The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS)
algorithm. This algorithm is intrinsically fair, but it generally does not provide the fastest service.
Consider, for example, a disk queue with requests for I/O to blocks on cylinders
98, 183, 37, 122, 14, 124, 65, 67,
If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183, 37,
122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders. The wild swing
from 122 to 14 and then back to 124 illustrates the problem with this schedule. If the requests for
cylinders 37 and 14 could be serviced together, before or after the requests for 122 and 124, the total
head movement could be decreased substantially, and performance could be thereby improved.
2. SSTF (shortest-seek-time-first)Scheduling
Service all the requests close to the current head position, before moving the head far
away to service other requests. That is selects the request with the minimum seek time from
the current head position.
3. SCAN Scheduling
The disk head starts at one end of the disk, and moves toward the other end, servicing requests
as it reaches each cylinder, until it gets to the other end of the disk. At the other end, the direction of
head movement is reversed, and servicing continues. The head continuously scans back and forth
across the disk.
4. C-SCAN Scheduling
Variant of SCAN designed to provide a more uniform wait time. It moves the head from one
end of the disk to the other, servicing requests along the way. When the head reaches the other end,
however, it immediately returns to the beginning of the disk, without servicing any requests on the
return trip.
5. LOOK Scheduling
Both SCAN and C-SCAN move the disk arm across the full width of the disk. In
this, the arm goes only as far as the final request in each direction. Then, it reverses direction
immediately, without going all the way to the end of the disk.
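For comparison, the total head movement of these policies on the request queue used above (head at cylinder 53) can be computed with a short sketch; the SCAN/LOOK code assumes the head first moves toward cylinder 0, and C-SCAN can be added along the same lines.

requests, head = [98, 183, 37, 122, 14, 124, 65, 67], 53

def movement(order, start):
    total, pos = 0, start
    for cyl in order:
        total += abs(cyl - pos)
        pos = cyl
    return total

def sstf(reqs, start):
    pending, order, pos = list(reqs), [], start
    while pending:
        nxt = min(pending, key=lambda c: abs(c - pos))   # closest pending request
        order.append(nxt)
        pending.remove(nxt)
        pos = nxt
    return order

down = sorted(c for c in requests if c <= head)[::-1]    # requests below the head
up = sorted(c for c in requests if c > head)             # requests above the head

print("FCFS", movement(requests, head))                  # 640 cylinders
print("SSTF", movement(sstf(requests, head), head))      # 236 cylinders
print("SCAN", movement(down + [0] + up, head))           # sweeps to cylinder 0, then up
print("LOOK", movement(down + up, head))                 # reverses at the last request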
2. Explain and compare FCFS, SSTF, C-SCAN and C-LOOK disk scheduling
algorithms with examples. (NOV/DEC 2012)
1. FCFS Scheduling:
The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS)
algorithm. This algorithm is intrinsically fair, but it generally does not provide the fastest service.
Consider, for example, a disk queue with requests for I/O to blocks on cylinders
98, 183, 37, 122, 14, 124, 65, 67,
If the disk head is initially at cylinder 53, it will first move from 53 to 98, then to 183, 37,
122, 14, 124, 65, and finally to 67, for a total head movement of 640 cylinders. The wild swing
from 122 to 14 and then back to 124 illustrates the problem with this schedule. If the requests for
cylinders 37 and 14 could be serviced together, before or after the requests for 122 and 124, the total
head movement could be decreased substantially, and performance could be thereby improved.
2. SSTF (shortest-seek-time-first)Scheduling
Service all the requests close to the current head position, before moving the head far
away to service other requests. That is selects the request with the minimum seek time from
the current head position.
3. C-SCAN Scheduling
Variant of SCAN designed to provide a more uniform wait time. It moves the head from one
end of the disk to the other, servicing requests along the way. When the head reaches the other end,
however, it immediately returns to the beginning of the disk, without servicing any requests on the
return trip.
4. LOOK Scheduling
Both SCAN and C-SCAN move the disk arm across the full width of the disk. In
this, the arm goes only as far as the final request in each direction. Then, it reverses direction
immediately, without going all the way to the end of the disk.
3. Write short notes on disk management. (APRIL/MAY 2019, NOV/DEC 2009) (April/May
2019)
Disk Management
1. Disk Formatting:
Before a disk can store data, it must be divided into sectors that the disk controller can read and
write. This process is called low-level formatting or physical formatting. It fills the disk with a special data
structure for each sector. The data structure for a sector consists of
✓ Header,
✓ Data area (usually 512 bytes in size), and
✓ Trailer.
The header and trailer contain information used by the disk controller, such as a sector number and
an error-correcting code (ECC).
To use a disk to hold files, the operating system still needs to record its own data structures
on the disk. It does so in two steps.
(a) The first step is to partition the disk into one or more groups of cylinders. Among the
partitions, one partition can hold a copy of the OS's executable code, while another holds
user files.
(b) The second step is logical formatting. The operating system stores the initial file-system
data structures onto the disk. These data structures may include maps of free and allocated
space and an initial empty directory.
2. Boot Block:
The initial bootstrap program is stored in read-only memory (ROM), at a fixed location that the
processor can start executing when powered up or reset.
The full bootstrap program is stored in a partition called the boot blocks, at a fixed location
on the disk. A disk that has a boot partition is called a boot disk or system disk.
The code in the boot ROM instructs the disk controller to read the boot blocks into memory
and then starts executing that code.
Bootstrap loader: load the entire operating system from a non-fixed location on disk, and to
start the operating system running.
3. Bad Blocks:
If blocks go bad during normal operation, a special program must be run manually to
search for the bad blocks and to lock them away as before. Data that resided on the bad blocks
usually are lost.
The controller maintains a list of bad blocks on the disk. Then the controller can be told
to replace each bad sector logically with one of the spare sectors. This scheme is known as sector
sparing or forwarding.
A typical bad-sector transaction might be as follows:
1. The operating system tries to read logical block 87.
2. The controller calculates the ECC and finds that the sector is bad. It reports this finding to the
operating system.
3. The next time the system is rebooted, a special command is run to tell the controller to replace the
bad sector with a spare.
4. After that, whenever the system requests logical block 87, the request is translated into the
replacement sector's address by the controller.
For an example, suppose that logical block 17 becomes defective, and the first available
spare follows sector 202. Then, sector slipping would remap all the sectors from 17 to 202,
moving them all down one spot. That is, sector 202 would be copied into the spare, then sector
201 into 202, and then 200 into 201, and so on, until sector 18 is copied into sector 19. Slipping
the sectors in this way frees up the space of sector 18, so sector 17 can be mapped to it.
4. Write short notes on file system in Linux. (NOV/DEC 2009) (NOV/DEC 2014)
File Concept
Examples of files:
• A text file is a sequence of characters organized into lines (and possibly pages). A source file
is a sequence of subroutines and functions, each of which is further organized as declarations
followed by executable statements. An object file is a sequence of bytes organized into blocks
understandable by the system’s linker.
An executable file is a series of code sections that the loader can bring into memory and execute.
File Attributes
• Name: The symbolic file name is the only information kept in human readable form.
• Identifier: This unique tag, usually a number identifies the file within the file system. It is the
non-human readable name for the file.
• Type: This information is needed for those systems that support different types.
• Location: This information is a pointer to a device and to the location of the file on that device.
• Size: The current size of the file (in bytes, words or blocks) and possibly the maximum
allowed size are included in this attribute.
• Protection: Access-control information determines who can do reading, writing, executing and so on.
• Time, date and user identification: This information may be kept for creation, last
modification and last use. These data can be useful for protection, security and usage monitoring.
File Operations
• Creating a file
• Writing a file
• Reading a file
• Repositioning within a file
• Deleting a file
• Truncating a file
Access Methods
1. Sequential Access
a. The simplest access method is sequential access. Information in the file is
processed in order, one record after the other. This mode of access is by far
the most common; for example, editors and compilers usually access files in
this fashion.
The bulk of the operations on a file are reads and writes. A read operation reads the next portion of the
file and automatically advances a file pointer, which tracks the I/O location. Similarly, a write appends to
the end of the file and advances to the end of the newly written material (the new end of file). Such a file
can be reset to the beginning and, on some systems, a program may be able to skip forward or backward n
records, for some integer n - perhaps only for n = 1. Sequential access is based on a tape model of a file, and
works as well on sequential-access devices as it does on random-access ones.
2. Direct Access
Another method is direct access (or relative access). A file is made up of fixed-length logical
records that allow programs to read and write records rapidly in no particular order. The direct-
access method is based on a disk model of a file, since disks allow random access to any file block.
For direct access, the file is viewed as a numbered sequence of blocks or records. A direct-
access file allows arbitrary blocks to be read or written. Thus, we may read block 14, then read
block 53, and then write block 7. There are no restrictions on the order of reading or writing for a
direct-access file.
Direct-access files are of great use for immediate access to large amounts of information.
Databases are often of this type. When a query concerning a particular subject arrives, we compute
which block contains the answer, and then read that block directly to provide the desired
information.
5. Write an elaborate note on RAID and RAID Levels. (APRIL/MAY 2010, MAY/JUNE
2012, NOV/DEC 2012, MAY/JUNE 2013)
RAID (Redundant Arrays of Independent Disks)
RAID, or “Redundant Arrays of Independent Disks” is a technique which makes use of a combination of
multiple disks instead of using a single disk for increased performance, data redundancy or both.
Key evaluation points for a RAID System
Reliability: How many disk faults can the system tolerate?
Availability: What fraction of the total session time is a system in uptime mode, i.e. how available is
the system for actual use?
Performance: How good is the response time? How high is the throughput (rate of processing
work)? Note that performance contains a lot of parameters and not just the two.
Capacity: Given a set of N disks each with B blocks, how much useful capacity is available to the
user?
RAID is very transparent to the underlying system. This means, to the host system, it appears as a single
big disk presenting itself as a linear array of blocks. This allows older technologies to be replaced by
RAID without making too many changes in the existing code.
Different RAID levels
RAID-0 (Striping)
Blocks are “striped” across disks.
Evaluation:
Reliability: 0
There is no duplication of data. Hence, a block once lost cannot be recovered.
Capacity: N*B
The entire space is being used to store data. Since there is no duplication, N disks each having B
blocks are fully utilized.
RAID-1 (Mirroring)
More than one copy of each block is stored in a separate disk. Thus, every block has two (or
more) copies, lying on different disks.
Evaluation:
Reliability: 1
With two copies of every block, mirroring can tolerate the failure of one disk in each mirrored pair.
Capacity: N*B/2
Every block is duplicated, so only half of the raw space holds unique data.
RAID-4 (Block-Level Striping with Dedicated Parity)
Blocks are striped across the data disks, and a dedicated disk stores the parity block of each stripe.
Evaluation:
Reliability: 1
RAID-4 allows recovery of at most 1 disk failure (because of the way parity works). If more than
one disk fails, there is no way to recover the data.
Capacity:(N-1)*B
One disk in the system is reserved for storing the parity. Hence, (N-1) disks are made available for
data storage, each disk having B blocks.
RAID-5 (Block-Level Striping with Distributed Parity)
This is a slight modification of the RAID-4 system where the only difference is that the parity rotates
among the drives.
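A sketch of why a single failed disk can be rebuilt under RAID-4/RAID-5: the parity block is the bitwise XOR of the data blocks in a stripe, so XOR-ing the surviving blocks regenerates the missing one (the block contents below are made-up example bytes).

from functools import reduce

def xor_blocks(blocks):
    # XOR corresponding bytes of all blocks in the stripe.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

stripe = [b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"]   # data blocks on three disks
parity = xor_blocks(stripe)                        # stored on the parity disk

# Suppose the second disk fails: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
print(rebuilt == stripe[1])                        # True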
6. Explain the services provided by Kernel I/O subsystem. (APRIL/MAY 2017, APRIL/MAY
2010, APRIL/MAY 2011, NOV/DEC2012, MAY/JUNE 2013)
Kernel I/O Subsystem
I/O Scheduling: The kernel schedules the set of pending I/O requests (for example, by reordering them)
to improve overall system performance and to share device access fairly among processes.
Buffering:
Buffer: A memory area that stores data while they are transferred between two devices or
between a device and an application.
Reasons for buffering:
a) To cope with a speed mismatch between the producer and consumer of a data stream.
b) To adapt between devices that have different data-transfer sizes.
c) To support copy semantics for application I/O (explained below).
Copy semantics: Suppose that an application has a buffer of data that it wishes to write to
disk. It calls the write () system call, providing a pointer to the buffer and an integer specifying the
number of bytes to write.
After the system call returns, what happens if the application changes the contents of the buffer?
With copy semantics, the version of the data written to disk is guaranteed to be the version at the
time of the application system call, independent of any subsequent changes in the application's
buffer.
A simple way that the operating system can guarantee copy semantics is for the write()
system call to copy the application data into a kernel buffer before returning control to the
application. The disk write is performed from the kernel buffer, so that subsequent changes to the
application buffer have no effect.
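A toy sketch of the kernel-buffer idea (the names here are illustrative, not real kernel interfaces): the data is copied before the call returns, so later changes to the application's buffer do not affect what is written.

kernel_buffers = []                      # stands in for the kernel's buffer cache

def write_with_copy_semantics(app_buffer):
    kernel_buffers.append(bytes(app_buffer))   # take a private copy now
    # the actual disk write happens later, from the kernel copy

app_buffer = bytearray(b"version-1")
write_with_copy_semantics(app_buffer)
app_buffer[8:9] = b"2"                   # the application changes its buffer afterwards
print(kernel_buffers[0])                 # b'version-1' is still what gets written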
Caching
A cache is a region of fast memory that holds copies of data. Access to the cached copy is
more efficient than access to the original
Cache vs buffer: A buffer may hold the only existing copy of a data item, whereas a cache
just holds a copy on faster storage of an item that resides elsewhere.
When the kernel receives a file I/O request,
1. The kernel first accesses the buffer cache to see whether that region of the file is already
available in main memory.
2. If so, a physical disk I/O can be avoided or deferred. Also, disk writes are accumulated in
the buffer cache for several seconds, so that large transfers are gathered to allow efficient
write schedules.
Spool: A buffer that holds output for a device, such as a printer, that cannot accept
interleaved data streams. A printer can serve only one job at a time, but several applications may wish
to print their output concurrently, without having their output mixed together.
The OS provides a control interface that enables users and system administrators:
a) To display the queue,
b) To remove unwanted jobs before those jobs print, and
c) To suspend printing while the printer is serviced.
Error Handling:
• An operating system that uses protected memory can guard against many kinds of hardware and
application errors.
• OS can recover from disk read, device unavailable, transient write failures
• Most return an error number or code when I/O request fails
System error logs hold problem reports
7. Explain the file allocation methods. (APRIL/MAY 2018, April/May 2019, APRIL/MAY 2010)
Allocation Methods
• The main problem is how to allocate space to these files so that disk space is utilized effectively and
files can be accessed quickly.
• There are three major methods of allocating disk space:
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation
1. Contiguous Allocation
• The contiguous – allocation method requires each file to occupy a set of contiguous blocks on the
disk.
• Contiguous allocation of a file is defined by the disk address and length (in block units) of the
first block. If the file is n blocks long and starts at location b, then it occupies blocks b, b+1,
b+2, ..., b+n-1.
• The directory entry for each file indicates the address of the starting block and the length
of the area allocated for this file.
Disadvantages: finding space for a new file is difficult, external fragmentation develops, and the final
size of the file must be known in advance. To reduce these problems:
• Use a modified contiguous allocation scheme, in which a contiguous chunk of space called an
extent is allocated initially and then, when that amount is not large enough, another chunk of
contiguous space (an extent) is added to the initial allocation.
• Internal fragmentation can still be a problem if the extents are too large, and external
fragmentation can be a problem as extents of varying sizes are allocated and deallocated.
2. Linked Allocation
• With linked allocation, each file is a linked list of disk blocks; the blocks may be scattered anywhere on
the disk, and the directory contains a pointer to the first block of the file.
• Advantage: a file can grow as long as free blocks are available; consequently, it is never necessary to
compact disk space.
Disadvantages:
Reliability
• Since the files are linked together by pointers scattered all over the disk, a hardware failure might
result in picking up the wrong pointer. This error could result in linking into the free-space list or
into another file. Partial solutions are to use doubly linked lists, or to store the file name and relative
block number in each block; however, these schemes require even more overhead for each file.
• An important variation on the linked allocation method is the use of a file allocation table(FAT).
• This simple but efficient method of disk- space allocation is used by the MS-DOS and OS/2
operating systems.
• A section of disk at beginning of each partition is set aside to contain the table.
• The table has entry for each disk block, and is indexed by block number.
• The FAT is used much as a linked list is.
• The directory entry contains the block number of the first block of the file.
• The table entry indexed by that block number contains the block number of the next block in the file.
• This chain continues until the last block which has a special end – of – file value as the table entry.
• Unused blocks are indicated by a 0 table value.
• Allocating a new block to a file is a simple matter of finding the first 0-valued table entry and
replacing the previous end-of-file value with the address of the new block; the 0 is then replaced
with the end-of-file value. An illustrative example is the FAT structure
for a file consisting of disk blocks 217, 618, and 339.
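Reading such a file amounts to following the chain of table entries; a sketch using the example blocks above, with -1 as an assumed end-of-file marker.

EOF = -1
fat = {217: 618, 618: 339, 339: EOF}     # table entry -> next block of the file

def file_blocks(first_block):
    blocks, current = [], first_block
    while current != EOF:
        blocks.append(current)
        current = fat[current]           # next block number from the FAT
    return blocks

print(file_blocks(217))                  # [217, 618, 339]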
3. Indexed Allocation
• Linked allocation solves the external – fragmentation and size- declaration problems of
contiguous allocation.
• Linked allocation cannot support efficient direct access, since the pointers to the blocks are scattered
with
the blocks themselves all over the disk and need to be retrieved in order.
• Indexed allocation solves this problem by bringing all the pointers together into one location:
the index block.
• Each file has its own index block, which is an array of disk – block addresses.
• The ith entry in the index block points to the ith block of the file.
• The directory contains the address of the index block.
• To read the ith block, we use the pointer in the ith index-block entry to find and read the
desired block. This scheme is similar to the paging scheme.
• When the file is created, all pointers in the index block are set to nil. When the
ith block is first written, a block is obtained from the free-space manager, and its address is put in
the ith index-block entry.
• Indexed allocation supports direct access, without suffering from external fragmentation, because any
free block on the disk may satisfy a request for more space.
Disadvantages
1. Pointer Overhead
• Indexed allocation does suffer from wasted space. The pointer overhead of the index block is generally
greater than the pointer overhead of linked allocation.
If the index block is too small, however, it will not be able to hold enough pointers for a large
file, and a mechanism will have to be available to deal with this issue:
• Linked Scheme: An index block is normally one disk block. Thus, it can be read and written
directly by itself. To allow for large files, we may link together several index blocks.
• Multilevel index: A variant of the linked representation is to use a first-level index block to
point to a set of second-level index blocks.
• Combined scheme:
o Another alternative, used in UFS, is to keep the first, say, 15 pointers of the index block in the file's
inode.
o The first 12 of these pointers point to direct blocks; that is, data for small files (no more than 12 blocks)
do not need a separate index block.
o The next pointer is the address of a single indirect block.
□ The single indirect block is an index block, containing not data, but rather the addresses of blocks that do
contain data.
o Then there is a double indirect block pointer, which contains the address of a block that contains
the addresses of blocks that contain pointers to the actual data blocks.
o The last pointer would contain the address of a triple indirect block.
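As a worked illustration of the combined scheme, the maximum file size can be computed as below; the 4 KB block size and 4-byte pointer size are assumptions, not values given in the answer.

block_size   = 4096                      # assumed 4 KB blocks
ptr_size     = 4                         # assumed 4-byte block pointers
ptrs_per_blk = block_size // ptr_size    # 1024 pointers per index block

direct = 12 * block_size
single = ptrs_per_blk * block_size
double = ptrs_per_blk ** 2 * block_size
triple = ptrs_per_blk ** 3 * block_size

max_size = direct + single + double + triple
print(max_size / 2 ** 40, "TiB")         # roughly 4 TiB with these assumptions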
8. Explain the role of Access Matrix for protection in files. (APRIL/MAY 2010)
File Protection
A directory is a container that is used to contain folders and file. It organizes files and folders into a
hierarchical manner.
There are several logical structures of a directory, these are given below.
1. Single-level directory –
Single-level directory is the simplest directory structure. In it, all files are contained in the same directory,
which makes it easy to support and understand.
A single-level directory has a significant limitation, however, when the number of files increases or when
the system has more than one user. Since all the files are in the same directory, they must have unique
names. If two users call their data file "test", then the unique-name rule is violated.
Advantages:
Since it is a single directory, its implementation is very easy.
If files are smaller in size, searching will be faster.
The operations like file creation, searching, deletion and updating are very easy in such a directory
structure.
Disadvantages:
There may be a chance of name collision because two files cannot have the same name.
Searching will become time-consuming if the directory grows large.
Files of the same type cannot be grouped together.
2. Two-level directory –
As we have seen, a single-level directory often leads to confusion of file names among different
users. The solution to this problem is to create a separate directory for each user.
In the two-level directory structure, each user has their own user file directory (UFD). The UFDs
have similar structures, but each lists only the files of a single user. The system's master file directory
(MFD) is searched whenever a new user logs in. The MFD is indexed by username or
account number, and each entry points to the UFD for that user.
Advantages:
We can give a full path like /User-name/directory-name/.
Different users can have the same directory as well as file name.
Searching of files becomes easier due to path names and user grouping.
Disadvantages:
A user is not allowed to share files with other users.
It is still not very scalable; two files of the same type cannot be grouped together for the same
user.
3. Tree-structured directory –
Once we have seen a two-level directory as a tree of height 2, the natural generalization is to extend
the directory structure to a tree of arbitrary height.
This generalization allows users to create their own subdirectories and to organize their files
accordingly.
A tree structure is the most common directory structure. The tree has a root directory, and every file in the
system has a unique path name.
Advantages:
Very general, since a full path name can be given.
Very scalable; the probability of name collision is less.
Searching becomes very easy; we can use both absolute and relative paths.
Disadvantages:
Not every file fits into the hierarchical model; files may need to be saved into multiple directories.
We cannot share files.
It can be inefficient, because accessing a file may require traversing multiple directories.
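A sketch of how a path name is resolved in a tree-structured directory, with nested dictionaries standing in for directories (the tree contents are made up for illustration):

root = {"home": {"alice": {"report.txt": "file"}, "bob": {}}, "etc": {"passwd": "file"}}

def resolve(path):
    node = root
    for component in path.strip("/").split("/"):
        node = node[component]            # KeyError here means "no such file or directory"
    return node

print(resolve("/home/alice/report.txt"))  # 'file'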
4. Acyclic graph directory –
An acyclic graph is a graph with no cycles; it allows the sharing of subdirectories and files. The same file or
subdirectory may appear in two different directories. It is a natural generalization of the tree-structured
directory.
It is used in situations such as two programmers working on a joint project who need
to access each other's files. The associated files are stored in a subdirectory, separating them from other
projects and files of other programmers; since they are working on a joint project, they want the
subdirectory to appear in both of their own directories. The common subdirectory should be shared, so
here we use acyclic-graph directories.
It is important to note that a shared file is not the same as a copy of the file. If any programmer makes
changes in the shared subdirectory, the changes are reflected in both directories.
Advantages:
We can share files.
Searching is easy because a file can be reached along different paths.
Disadvantages:
We share the files via linking, so deleting a file may create problems:
If the link is a soft link, then after deleting the file we are left with a dangling pointer.
In the case of a hard link, to delete a file we have to delete all the references associated with it.
5. General graph directory structure –
In general graph directory structure, cycles are allowed within a directory structure where multiple
directories can be derived from more than one parent directory.
The main problem with this kind of directory structure is to calculate total size or space that has been
taken by the files and directories.
Advantages:
It allows cycles.
It is more flexible than other directories structure.
Disadvantages:
It is more costly than others.
It needs garbage collection.
10. Explain the various file directory structures. (NOV/DEC 2012)
Disadvantage:
• To condense the length of the access control list, many systems recognize three classifications of users
in connection with each file: owner, group, and universe (all other users).
12. Describe the two level and acyclic graph schemes for defining the logical structure of a
directory. (MAY/JUNE 2013)
Disadvantage:
13. Explain the Linked list and indexed file allocation methods with neat diagram. Mention their
advantages and disadvantages. (MAY/JUNE 2013)(April/May 2019)
Linked Allocation
With linked allocation, each file is a linked list of disk blocks; the blocks may be scattered anywhere on the
disk, and the directory contains a pointer to the first block of the file. A file can grow as long as free blocks
are available; consequently, it is never necessary to compact disk space.
Disadvantages:
• To find the ith block of a file, we must start at the beginning of that file and follow the pointers
until we get to the ith block. Each access to a pointer requires a disk read, and sometimes a disk seek;
consequently, it is inefficient to support a direct-access capability for linked-allocation files.
3. Reliability
• Since the files are linked together by pointers scattered all over the disk, a hardware failure might
result in picking up the wrong pointer. This error could result in linking into the free-space list or
into another file. Partial solutions are to use doubly linked lists, or to store the file name and relative
block number in each block; however, these schemes require even more overhead for each file.
• An important variation on the linked allocation method is the use of a file allocation table(FAT).
• This simple but efficient method of disk- space allocation is used by the MS-DOS and OS/2
operating systems.
• A section of disk at beginning of each partition is set aside to contain the table.
• The table has entry for each disk block, and is indexed by block number.
• The FAT is used much as a linked list is.
• The directory entry contains the block number of the first block of the file.
• The table entry indexed by that block number contains the block number of the next block in the file.
• This chain continues until the last block which has a special end – of – file value as the table entry.
• Unused blocks are indicated by a 0 table value.
• Allocating a new block to a file is a simple matter of finding the first 0-valued table entry and
replacing the previous end-of-file value with the address of the new block; the 0 is then replaced
with the end-of-file value. An illustrative example is the FAT structure
for a file consisting of disk blocks 217, 618, and 339.
Indexed Allocation
• Linked allocation solves the external – fragmentation and size- declaration problems of
contiguous allocation.
• Linked allocation cannot support efficient direct access, since the pointers to the blocks are scattered
with the blocks themselves all over the disk and need to be retrieved in order.
• Indexed allocation solves this problem by bringing all the pointers together into one location:
the index block.
• Each file has its own index block, which is an array of disk – block addresses.
• The ith entry in the index block points to the ith block of the file.
• The directory contains the address of the index block .
• To read the ith block, we use the pointer in the ith index-block entry to find and read the
desired block. This scheme is similar to the paging scheme.
• When the file is created, all pointers in the index block are set to nil. When the
ith block is first written, a block is obtained from the free-space manager, and its address is put in
the ith index-block entry.
• Indexed allocation supports direct access, without suffering from external fragmentation, because any
free block on the disk may satisfy a request for more space.
Disadvantages
1. Pointer Overhead
• Indexed allocation does suffer from wasted space. The pointer overhead of the index block is generally
greater than the pointer overhead of linked allocation.
If the index block is too small, however, it will not be able to hold enough pointers for a large
file, and a mechanism will have to be available to deal with this issue:
• Linked Scheme: An index block is normally one disk block. Thus, it can be read and written
directly by itself. To allow for large files, we may link together several index blocks.
• Multilevel index: A variant of the linked representation is to use a first-level index block to
point to a set of second-level index blocks.
• Combined scheme:
o Another alternative, used in UFS, is to keep the first, say, 15 pointers of the index block in the file's
inode.
o The first 12 of these pointers point to direct blocks; that is, data for small (no more than 12 blocks)
files do not need a separate index block. The next pointer is the address of a single indirect
block.
o The single indirect block is an index block, containing not data, but rather the addresses of blocks that do
contain data.
o Then there is a double indirect block pointer, which contains the address of a block that contains
the addresses of blocks that contain pointers to the actual data blocks.
o The last pointer would contain the address of a triple indirect block.
UNIT V - CASE STUDY
PART – A
The Linux file system is a hierarchically structured tree. Linux distinguishes between uppercase
and lowercase letters in the file system.
18. Write short notes on driver registration in Linux (April/May 2019)
driver_register () is the low-level function used to register a device driver with the bus. It adds the driver to
the bus's list of drivers. When a device driver is registered with the bus, the core walks through the bus's
list of devices and calls the bus's match callback for each device that does not have a driver associated
with it in order to find out if there are any devices that the driver can handle.
When a match occurs, the device and the device driver are bound together. The process of associating a
device with a device driver is called binding.
Back to the registration of drivers with our packt bus; one has to use packt_register_driver(struct packt_driver
*driver), which is a wrapper around driver_register()
PART-B
1. Explain in detail about the concepts of Linux system.
A Linux-based system is a modular Unix-like operating system. It derives much of its basic design from
principles established in UNIX. Such a system uses a monolithic kernel which handles process control,
networking, and peripheral and file system access.
Shell - Linux provides a special interpreter program which can be used to execute commands of the
operating system.
Security - Linux provides user security using authentication features like password protection/
controlled access to specific files/ encryption of data.
Kernel - Kernel is the core part of Linux. It is responsible for all major activities of this operating
system. It consists of various modules and it interacts directly with the underlying hardware. The kernel
provides the required abstraction to hide low-level hardware details from system or application programs.
System Library - System libraries are special functions or programs through which application programs
or system utilities access the kernel's features. These libraries implement most of the functionalities of
the operating system and do not require kernel-module code access rights.
System Utility - System utility programs are responsible for performing specialized, individual-level tasks.
Architecture
1. Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
2. Kernel - Core component of Operating System, interacts directly with hardware, provides
low level services to upper layer components.
3. Shell - An interface to kernel, hiding complexity of kernel's functions from users. Takes
commands from user and executes kernel's functions.
4. Utilities - Utility programs giving the user most of the functionalities of an operating system.
Virtualization
Virtualization refers to the act of creating a virtual (rather than actual) version of
something, including a virtual computer hardware platform, operating system (OS),
storage device, or computer network resources.
Hardware Virtualization
Benefits of Virtualization
1. Instead of deploying several physical servers for each service, only one
server can be used. Virtualization lets multiple OSs and applications run
on a server at a time. Consolidating hardware gives vastly higher
productivity from fewer servers.
2. If the preferred operating system is deployed as an image, we need to
go through the installation process only once for the entire infrastructure.
3. Improve business continuity: Virtual operating system images allow for
instant recovery in case of a system failure. The crashed system can be
restored by copying the virtual image.
4. Increased uptime: Most server virtualization platforms offer a number of
advanced features that just aren't found on physical servers which
increases servers’ uptime. Some of features are live migration, storage
migration, fault tolerance, high availability, and distributed resource
scheduling.
5. Reduce capital and operating costs: Server consolidation can be done
by running multiple virtual machines (VM) on a single physical server.
Fewer servers means lower capital and operating costs.
Architecture - Virtualization
3. Write about LINUX architecture and LINUX kernel with neat sketch. (Nov/Dec
2015)
Shell - Linux provides a special interpreter program which can be used to execute commands
of the operating system.
Security - Linux provides user security using authentication features like password
protection/ controlled access to specific files/ encryption of data.
Installed components of a Linux system include the following:
A bootloader is a program that loads the Linux kernel into the computer's main memory; it is
executed by the computer when it is turned on, after the firmware initialization is
performed.
An init program is the first process launched by the Linux kernel, and is at the root of the
process tree.
Software libraries, which contain code that can be used by running processes. The most
commonly used software library on Linux systems is the GNU C Library (glibc), which provides the
C standard library; widget toolkits are another example.
User interface programs such as command shells or windowing environments. The user
interface, also known as the shell, is either a command-line interface (CLI), a graphical user
interface (GUI), or controls attached to the associated hardware.
Architecture
1. Hardware layer - Hardware consists of all peripheral devices (RAM/ HDD/ CPU etc).
2. Kernel - Core component of Operating System, interacts directly with hardware,
provides low level services to upper layer components.
3. Shell - An interface to kernel, hiding complexity of kernel's functions from users.
Takes commands from user and executes kernel's functions.
4. Utilities - Utility programs giving the user most of the functionalities of an operating
system.
PROCESS MANAGEMENT
A program does nothing unless its instructions are executed by a CPU. A program in execution
is called a process. In order to accomplish its task, a process needs computer resources.
There may exist more than one process in the system which may require the same resource at the
same time. Therefore, the operating system has to manage all the processes and the resources in a
convenient and efficient way.
Some resources may need to be used by only one process at a time to maintain consistency;
otherwise the system can become inconsistent and a deadlock may occur.
The operating system is responsible for the following activities in connection with process
management:
Scheduling processes and threads on the CPUs.
Creating and deleting both user and system processes.
Suspending and resuming processes.
Providing mechanisms for process synchronization.
Providing mechanisms for process communication.
Attributes of a process
The Attributes of the process are used by the Operating System to create the process control
block (PCB) for each of them. This is also called context of the process. Attributes which are
stored in the PCB are described below.
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique
identification of the process in the system.
2. Program counter
A program counter stores the address of the instruction at which the process was suspended
(the next instruction to be executed). The CPU uses this address when the execution of this process
is resumed.
3. Process State
The Process, from its creation to the completion, goes through various states which are new,
ready, running and waiting. We will discuss about them later in detail.
4. Priority
Every process has its own priority. The process with the highest priority among the processes
gets the CPU first. This is also stored on the process control block.
5. General Purpose Registers
Every process has its own set of registers which are used to hold the data which is generated
during the execution of the process.
6. List of Open Files
During execution, every process uses some files which need to be present in the main
memory. The OS also maintains a list of open files in the PCB.
7. List of open devices
The OS also maintains a list of all open devices which are used during the execution of the process.
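The attributes listed above can be pictured as the fields of a process control block; a minimal sketch (the field names are illustrative):

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    program_counter: int = 0
    state: str = "new"                 # new, ready, running, waiting, terminated
    priority: int = 0
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)
    open_devices: list = field(default_factory=list)

pcb = PCB(pid=42, priority=5)
pcb.state = "ready"                    # the process enters the ready queue
print(pcb)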
1. Creation
Once the process is created, it will be ready and come into the ready queue (main memory) and
will be ready for the execution.
2. Scheduling
Out of the many processes present in the ready queue, the Operating system chooses one process
and start executing it. Selecting the process which is to be executed next, is known as scheduling.
3. Execution
Once the process is scheduled for execution, the processor starts executing it. The process may
enter the blocked or wait state during execution; in that case the processor starts
executing other processes.
4. Deletion/killing
Once the purpose of the process gets over then the OS will kill the process. The Context of the
process (PCB) will be deleted and the process gets terminated by the Operating system.
Process Schedulers
Operating system uses various schedulers for the process scheduling described below.
Long term scheduler is also known as job scheduler. It chooses the processes from the pool
(secondary memory) and keeps them in the ready queue maintained in the primary memory.
Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long
term scheduler is to choose a perfect mix of IO bound and CPU bound processes among the jobs
present in the pool.
If the job scheduler chooses more IO bound processes then all of the jobs may reside in the
blocked state all the time and the CPU will remain idle most of the time. This will reduce the
degree of Multiprogramming. Therefore, the Job of long term scheduler is very critical and may
affect the system for a very long time.
Short term scheduler is also known as CPU scheduler. It selects one of the Jobs from the ready
queue and dispatch to the CPU for the execution.
A scheduling algorithm is used to select which job is going to be dispatched for the execution.
The job of the short-term scheduler can be very critical in the sense that if it selects a job whose
CPU burst time is very high, then all the jobs after that will have to wait in the ready queue for a
very long time.
This problem is called starvation which may arise if the short term scheduler makes some
mistakes while selecting the job.
Medium-term scheduler takes care of the swapped-out processes. If a running process
needs some I/O time to complete, then its state must be changed from running to
waiting.
Medium term scheduler is used for this purpose. It removes the process from the running state to
make room for the other processes. Such processes are the swapped out processes and this
procedure is called swapping. The medium term scheduler is responsible for suspending and
resuming the processes.
It reduces the degree of multiprogramming. The swapping is necessary to have a perfect mix of
processes in the ready queue.
Process Queues
The Operating system manages various types of queues for each of the process states. The PCB
related to the process is also stored in the queue of the same state. If the Process is moved from
one state to another state then its PCB is also unlinked from the corresponding queue and added
to the other state queue in which the transition is made.
1. Job Queue
Initially, all the processes are stored in the job queue. It is maintained in secondary
memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in
primary memory.
2. Ready Queue
Ready queue is maintained in primary memory. The short-term scheduler picks a job from the
ready queue and dispatches it to the CPU for execution.
3. Waiting Queue
When the process needs some IO operation in order to complete its execution, OS changes the
state of the process from running to waiting. The context (PCB) associated with the process gets
stored on the waiting queue which will be used by the Processor when the process finishes the
IO.
MEMORY MANAGEMENT
The choice between static and dynamic loading is made at the time the computer program is
developed. If the program is to be loaded statically, then at compile time the complete program
is compiled and linked, leaving no external program or module dependency. The linker combines
the object program with the other necessary object modules into an absolute program, which
also includes logical addresses.
If the program is written to be dynamically loaded, the compiler compiles the program and, for
all modules that are to be included dynamically, only references are provided; the rest of the
work is done at execution time.
At load time, with static loading, the absolute program (and data) is loaded into memory so that
execution can start.
With dynamic loading, the dynamic routines of a library are stored on disk in relocatable form
and are loaded into memory only when they are needed by the program.
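On POSIX systems, this kind of dynamic loading is exposed to programs through dlopen()/dlsym(). The following is a minimal sketch; the library file name "libm.so.6" is Linux-specific and merely illustrative.

#include <stdio.h>
#include <dlfcn.h>         /* POSIX dynamic loading interface */

int main(void) {
    /* The math library is loaded only now, at run time, not at link time. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the address of cos() inside the freshly loaded library. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine != NULL)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);        /* the routines can be unloaded when no longer needed */
    return 0;
}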
Swapping
Swapping is the technique of temporarily moving a process out of main memory to a backing
store and later bringing it back for continued execution; the medium term scheduler described
above is responsible for this suspension and resumption.
Memory Allocation
1. Single-partition allocation
In this type of allocation, a relocation-register scheme is used to protect user processes from
each other and from changes to operating-system code and data. The relocation register contains
the value of the smallest physical address, whereas the limit register contains the range of
logical addresses; each logical address must be less than the limit register (see the sketch after
this list).
2. Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized partitions, each
of which can contain only one process. When a partition is free, a process is selected from the
input queue and loaded into the free partition. When the process terminates, the partition
becomes available for another process.
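A minimal C sketch of the relocation-register and limit-register check described for single-partition allocation is given below; the register values are made up for illustration.

#include <stdio.h>

#define RELOCATION_REG 14000u   /* smallest physical address of the partition */
#define LIMIT_REG       3000u   /* range of the process's logical addresses   */

/* Returns the physical address, or -1 to indicate an addressing error (trap). */
static long translate(unsigned logical) {
    if (logical >= LIMIT_REG)
        return -1;               /* logical address outside the process's range */
    return (long)(RELOCATION_REG + logical);
}

int main(void) {
    printf("logical 100  -> physical %ld\n", translate(100));    /* 14100 */
    printf("logical 4000 -> physical %ld\n", translate(4000));   /* -1: trap to OS */
    return 0;
}

In real hardware this comparison and addition are performed by the memory-management hardware on every reference; the code above only mirrors the logic.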
Fragmentation
As processes are loaded into and removed from memory, the free memory space is broken into
little pieces. After some time, processes cannot be allocated to these memory blocks because the
blocks are too small, and the blocks remain unused. This problem is known as fragmentation.
Fragmentation is of two types:
1. External fragmentation
The total memory space is enough to satisfy a request or to hold a process, but it is not
contiguous, so it cannot be used.
2. Internal fragmentation
The memory block assigned to a process is bigger than the process requires. The leftover portion
inside the block is unused and cannot be used by any other process.
External fragmentation can be reduced by compaction, i.e., shuffling memory contents to place
all free memory together in one large block. To make compaction feasible, relocation should be
dynamic.
Internal fragmentation can be reduced by assigning the smallest partition that is still large
enough for the process, as illustrated in the sketch below.
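The following small C sketch illustrates that idea for fixed-size partitions: it chooses the smallest free partition that is still large enough for a request and reports the internal fragmentation left over. The partition sizes and the request size are invented for illustration.

#include <stdio.h>

#define NPART 4

int main(void) {
    int part_size[NPART] = {100, 200, 300, 600};  /* partition sizes in KB */
    int in_use[NPART]    = {0, 0, 0, 0};          /* all partitions free   */
    int request = 212;                            /* process needs 212 KB  */

    /* Pick the smallest free partition that can hold the request. */
    int best = -1;
    for (int i = 0; i < NPART; i++) {
        if (!in_use[i] && part_size[i] >= request &&
            (best == -1 || part_size[i] < part_size[best]))
            best = i;
    }

    if (best == -1) {
        printf("no single partition is large enough for %d KB\n", request);
    } else {
        in_use[best] = 1;
        printf("allocated the %d KB partition; internal fragmentation = %d KB\n",
               part_size[best], part_size[best] - request);   /* 300 - 212 = 88 KB */
    }
    return 0;
}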
7. Explain the architecture of iOS. Discuss the media and service layers clearly. (April/May 2019)
Architecture of iOS
The architecture of iOS is a layered architecture. At the highest level, iOS acts as an
intermediary between the underlying hardware and the apps you write. Apps do not
communicate with the underlying hardware directly.
Apps talk to the hardware through a collection of well-defined system interfaces. These
interfaces make it simple to write apps that work consistently on devices with different hardware
capabilities.
The lower layers provide the basic services that all applications rely on, and the higher layers
provide sophisticated graphics and interface-related services.
Apple delivers most of its system interfaces in special packages called frameworks. A
framework is a directory that holds a dynamic shared library and the related resources, such as
header files, images, and helper apps, required to support that library. Every layer has a set of
frameworks that developers use to construct applications.
1. Core OS Layer:
The Core OS layer holds the low-level features that most other technologies are built upon.
64-bit support: from iOS 7 onwards, 64-bit app development is supported, which enables
applications to run faster.
2. Core Services Layer
Some of the important frameworks available in the Core Services layer are detailed below (a
small C sketch using one of them follows the list):
CloudKit framework – Provides a medium for moving data between your app and iCloud.
Core Data framework – Technology for managing the data model of a Model-View-Controller
app.
Core Foundation framework – C interfaces that provide fundamental data management and
service features for iOS apps.
Core Motion framework – Gives access to all motion-based data available on a device; for
example, accelerometer-based information can be accessed through it.
Foundation framework – Objective-C wrappers for many of the features found in the Core
Foundation framework.
HomeKit framework – Framework for communicating with and controlling connected devices
in a user's home.
Social framework – Simple interface for accessing the user's social media accounts.
StoreKit framework – Provides support for buying content and services from inside your iOS
apps, a feature known as In-App Purchase.
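Core Foundation is one of the frameworks in this layer exposed as plain C interfaces, so a tiny sketch can be given here. It assumes an Apple toolchain and linking with -framework CoreFoundation; the string and output are illustrative only.

#include <CoreFoundation/CoreFoundation.h>

int main(void) {
    /* Create an immutable CFString from a C string. */
    CFStringRef greeting =
        CFStringCreateWithCString(kCFAllocatorDefault,
                                  "Hello, Core Foundation",
                                  kCFStringEncodingUTF8);

    CFShow(greeting);     /* prints a description of the object to stderr */
    CFRelease(greeting);  /* Core Foundation objects use manual retain/release */
    return 0;
}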
3. Media Layer: Graphics, audio and video technology is enabled by the Media layer.
Graphics frameworks:
UIKit graphics – Provides high-level support for drawing images and for animating the
content of your views.
Core Graphics framework – The native drawing engine for iOS apps; it provides support for
custom 2D vector- and image-based rendering.
Core Animation – An underlying technology that optimizes the animation experience of your
apps.
Core Image – Provides advanced support for manipulating video and still images in a
nondestructive way.
Metal – Permits very high performance for sophisticated graphics rendering and computation
work; it offers very low-overhead access to the A7 GPU.
Audio and video frameworks:
Media Player framework – A high-level framework that gives easy access to a user's iTunes
library and support for playing playlists.
AVKit framework – Gives a collection of easy-to-use interfaces for presenting video.
Core Media framework – Describes the low-level interfaces and data types for operating on
media.
4. Cocoa Touch Layer: The following frameworks belong to the Cocoa Touch layer, which sits
above the Media layer and provides the key infrastructure for building iOS apps.
EventKit framework – Gives view controllers for showing the standard system interfaces for
viewing and editing calendar-related events.
GameKit framework – Implements support for Game Center, which allows users to share
their game-related information online.
iAd framework – Allows you to deliver banner-based advertisements from your app.
MapKit framework – Gives a scrollable map that you can include in your app's user interface.
Twitter framework – Supports a UI for composing tweets and support for creating URLs to
access the Twitter service.
UIKit framework – Gives the vital infrastructure for implementing graphical, event-driven
apps in iOS. Some of the important functions of the UIKit framework are: