Operating System (OS) Complete Notes With Most Imp. QNA

Unit 1

PART – A: (Multiple Choice Questions)


2x10=20 Marks

1 The strategy of allowing processes that are logically runnable to be temporarily suspended is called
A. preemptive scheduling
B. non-preemptive scheduling
C. shortest job first
D. first come first served
E. None of the above

2 Process is
A. a program in a high-level language kept on disk
B. the contents of main memory
C. a program in execution
D. a job in secondary memory
E. None of the above
3 Fork is
A. the dispatching of a task
B. the creation of a new job
C. the creation of a new process
D. increasing the priority of a task
E. None of the above
4 Interprocess communication
A. is required for all processes
B. is usually done via disk drives
C. is never necessary
D. allows processes to synchronize activity
5 The FIFO algorithm
A. executes first the job that last entered the queue
B. executes first the job that first entered the queue
C. executes first the job that has been in the queue the longest
D. executes first the job with the least processor needs
E. None of the above

UNIT-1
PART – B: (Short Answer Questions)
1 What are the functions of an operating system?
Functions of an operating system:

• Memory Management
• File Management
• Device Management
• I/O management
• Networking
• Security
• Processor Management
• Secondary storage management
• Command interpretation

2 Describe system calls and their types.


System calls provide an interface between a process and the operating system. They allow user-level processes to request services from the operating system that the process itself is not permitted to perform.

Types of System Calls        Windows                  Linux

Process Control              CreateProcess()          fork()
                             ExitProcess()            exit()
                             WaitForSingleObject()    wait()

File Management              CreateFile()             open()
                             ReadFile()               read()
                             WriteFile()              write()
                             CloseHandle()            close()

Device Management            SetConsoleMode()         ioctl()
                             ReadConsole()            read()
                             WriteConsole()           write()

Information Maintenance      GetCurrentProcessID()    getpid()
                             SetTimer()               alarm()
                             Sleep()                  sleep()

Communication                CreatePipe()             pipe()
                             CreateFileMapping()      shmget()
                             MapViewOfFile()          mmap()

3 Explain Booting the system and Bootstrap program in operating system.


The procedure of starting a computer by loading the kernel is known as booting the system.

When a user first turns on the computer, some initial program must run. This initial program is known as the bootstrap program. It is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM). The bootstrap program locates the kernel, loads it into main memory, and starts its execution.

4 What is an operating system?


An Operating System (OS) is an interface between a computer user and the computer hardware. An operating system is software that performs all the basic tasks such as file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

Some popular operating systems include Linux, Windows, VMS, OS/400, AIX, z/OS, etc.

5 What is a Kernel?
The kernel is the active part of an OS, i.e., the part of the OS running at all times. It is a program that can interact with the hardware. Examples: device drivers, DLL files, system files, etc.

Or

The kernel is the core part of an operating system, which manages system resources. It also acts as a bridge between applications and the hardware of the computer. It is one of the first programs loaded on start-up (after the bootloader).

6 Explain the basic functions of process management.


The basic functions of the OS with respect to process management are:

- Allocating resources to processes,
- Enabling processes to share and exchange information,
- Protecting the resources of each process from other processes, and
- Enabling synchronization among processes.

7 What do you know about interrupt?


An interrupt is the mechanism by which modules such as I/O devices or memory can interrupt the normal processing of the CPU. Events such as clicking a mouse, dragging a cursor, or printing a document are cases where an interrupt is generated.
8 What are the various IPC mechanisms?

Inter-process communication (IPC) is used for exchanging data between multiple threads in one or more processes or programs. The processes may be running on a single computer or on multiple computers connected by a network.

IPC is a set of programming interfaces that allow a programmer to coordinate activities among program processes that can run concurrently in an operating system. This allows a specific program to handle many user requests at the same time.

9 Differentiate between the user mode and monitor mode.


Difference between User and Monitor Mode

• User mode and monitor mode are distinguished by a bit called the mode bit.
• The mode bit is 1 in user mode and 0 in monitor mode.
• At boot time, the hardware starts in monitor mode.
• Whenever an interrupt occurs, the hardware switches from user mode to monitor mode.
• The system always switches to user mode before passing control to a user program.
• Whenever the system gains control of the computer it works in monitor mode; otherwise it works in user mode.

10 Explain PCB.
• For each process, the operating system maintains the data structure, which keeps the
complete information about that process. This record or data structure is called Process
Control Block (PCB).
• Whenever a user creates a process, the operating system creates the corresponding PCB for
that process. These PCBs of the processes are stored in the memory that is reserved for the
operating system.
• The process control block has many fields that store information about the process. A PCB contains the process ID, process state, process priority, accounting information, program counter, and other information that helps in controlling the operations of the process.

11 What is meant by Batch Systems?


The batch operating system is one of the important types of operating system. Users of a batch operating system do not interact with the computer directly. Each user prepares a job on an offline device such as punch cards and submits it to the computer operator. To speed up processing, jobs with similar needs are batched together and run as a group: the programmers leave their programs with the operator, and the operator then sorts programs with similar requirements into batches.
The problems that occur with batch systems are as follows −
• There is a lack of interaction between the user and the job.
• CPU is being often idle, because the speed of the mechanical I/O devices is slower than the
CPU.
• It is difficult to provide the desired priority.
12 What is meant by Multiprogramming?
Multiprogramming is the allocation of more than one concurrent program on a computer system and its resources, so that multiple programs can be in progress at the same time.

• Multiprogramming allows the CPU and I/O devices to be used effectively by several users.
• Multiprogramming makes sure that the CPU always has something to execute, thus increasing CPU utilization.

13 What are the advantages of distributed systems?


Advantages of Distributed Systems
Some advantages of Distributed Systems are as follows −

• All the nodes in the distributed system are connected to each other. So nodes
can easily share data with other nodes.
• More nodes can easily be added to the distributed system i.e. it can be scaled
as required.
• Failure of one node does not lead to the failure of the entire distributed
system. Other nodes can still communicate with each other.
• Resources like printers can be shared with multiple nodes rather than being
restricted to just one.

14 What are the applications of real-time systems?


Applications of Real Time Operating System

Real-time systems are used in:

• Airline reservation systems.
• Air traffic control systems.
• Systems that provide immediate updating.
• Any system that provides up-to-the-minute information on stock prices.
• Defense application systems such as RADAR.
• Networked Multimedia Systems
• Command Control Systems
• Internet Telephony
• Anti-lock Brake Systems
• Heart Pacemaker
15 What is the use of Fork and Exec System Calls?
fork() is used to create a new process; when exec() is invoked, the program specified in the parameter to exec() replaces the entire process image.
Example of fork():

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main()
{
    fork();                    /* after this call, two processes run the next line */
    printf("Hello world!\n");  /* printed twice: once by parent, once by child */
    return 0;
}

Example of exec()

Caller file (the main program, which invokes execv()):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main()
{
    char *args[] = {"./EXEC", NULL};
    execv(args[0], args);      /* replaces this process image with ./EXEC */
    printf("Ending-----");     /* reached only if execv() fails */
    return 0;
}

Called file (compiled to ./EXEC):
#include <stdio.h>
#include <unistd.h>

int main()
{
    printf("I am EXEC.c called by execv() ");
    printf("\n");
    return 0;
}

16 What is meant by the state of the process?


A process, from its creation to its completion, goes through various stages in order to complete the work defined for it. Each of these stages is known as a state of the process.

The different states of a process are:

1. New - the process is being created (e.g., via fork()).
2. Ready - the process has been created but has not yet been assigned a processor.
3. Running - the process has been assigned a processor and is executing.
4. Waiting - the process is waiting for some event to complete (e.g., I/O).
5. Terminated - the process has completed or been aborted.

17 What is the function of the following UNIX commands: vi, cat, pwd, ps?
vi: Screen-oriented text editor for creating and editing files.
• Syntax: vi [FILE]
cat: Concatenate files and print to stdout.
• Syntax: cat [OPTION]…[FILE]
• Example: Create file1 with the entered content
• $ cat > file1
• Hello
• ^D
pwd: Print the present working directory.
• Syntax: pwd [OPTION]
• Example: Prints the path ending in 'dir1' if the current working directory is dir1
• $ pwd
ps (the "psw" in the question is likely a typo for ps): Report a snapshot of the current processes.
• Syntax: ps [OPTION]

18 Draw a neat process state diagram.


19 What are the 3 different types of scheduling queues?
The Operating System maintains the following important process scheduling queues

• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
• Device queues − The processes which are blocked due to unavailability of an
I/O device constitute this queue.
20 Differentiate between a Multiprogramming System and a Timesharing System.

21 What are the advantages of distributed operating system?

Advantages of distributed operating systems:

• They give more performance than a single system.
• If one computer in the distributed system malfunctions or is corrupted, another node will take over its work.
• More resources can be added easily.
• Resources such as printers can be shared across multiple machines.

22 What are the differences between multiprocessing and multiprogramming?

23 What are the primary differences between Network Operating System and
Distributed Operating System?
UNIT-1
PART – C: (Long Answer Questions)
Explain the various types of System calls with an example for each.
The interface between a process and an operating system is provided by system calls. In general, system calls are available as assembly language instructions, and they are also included in the manuals used by assembly-level programmers.
System calls are usually made when a process in user mode requires access to a resource; it then requests the kernel to provide the resource via a system call.
There are mainly five types of system calls. These are explained in detail as follows –

1)Process Control
These system calls deal with processes such as process creation, process termination etc.

2)File Management
These system calls are responsible for file manipulation such as creating a file, reading a file, writing into a file etc.

3)Device Management
These system calls are responsible for device manipulation such as reading from device buffers, writing
into device buffers etc.

4)Information Maintenance
These system calls handle information and its transfer between the operating system and the user
program.

5)Communication
These system calls are useful for interprocess communication. They also deal with creating and deleting
a communication connection.

Define operating system and list out the functions and components of an operating system.

An Operating System (OS) is an interface between a computer user and the computer hardware. An operating system is software that performs all the basic tasks such as file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.
Some popular operating systems include the Linux operating system, the Windows operating system, etc.
Some of the important functions of an operating system are:

• Memory Management
=>Memory management refers to management of Primary Memory or
Main Memory. Main memory is a large array of words or bytes where each
word or byte has its own address.

• Processor Management
=> An Operating System does the following activities for processor management

• Keeps tracks of processor and status of process. The program responsible
for this task is known as traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates processor when a process is no longer required.

• Device Management

=> An Operating System manages device communication via their


respective drivers.

• File Management

=> A file system is normally organized into directories for easy navigation
and usage. These directories may contain files and other directories.

• Security

=> By means of password and similar other techniques, it prevents


unauthorized access to programs and data.

• Control over system performance


=> Recording delays between request for a service and response from the
system.
• Job accounting
=> Keeping track of time and resources used by various jobs and users.
• Error detecting aids
=> Production of dumps, traces, error messages, and other debugging and error-detecting aids.
• Coordination between other software and users
=> Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.
Components of Operating System
The components of an operating system play a key role to make a variety of
computer system parts work together. The operating components are

1 Kernel

The kernel in the OS provides the basic level of control on all the computer
peripherals.

2 Process Execution

The OS gives an interface between the hardware as well as an application program


so that the program can connect through the hardware device by simply following
procedures & principles configured into the OS

3 Interrupt

In the operating system, interrupts are essential because they give a reliable
technique for the OS to communicate & react to their surroundings.
4 Memory Management

The functionality of an OS is nothing but memory management which manages


main memory & moves processes backward and forward between disk & main
memory during implementation.

5 Multitasking

It describes the working of several independent computer programs on a similar


computer system. Multitasking in an OS allows an operator to execute one or more
computer tasks at a time

6 Networking

Networking occurs when processors interact with each other through communication lines. The design of a communication network must consider routing, connection methods, safety, and the problems of contention and security.

7 Security

If a computer allows numerous individual processes to run concurrently, then each of those processes has to be protected from the activities of the others.
8 User Interface

A user interface (UI), often graphical (GUI), is the part of an OS that permits an operator to interact with the system. A text-based user interface displays text, and its commands are typed on a command line with the help of a keyboard.

What do you mean by a Process? How does it differ from a Program? Explain the structure of a Process Control Block.
The term process (Job) refers to program code that has been loaded into a
computer’s memory so that it can be executed by the central processing unit (CPU).
A process can be described as an instance of a program running on a computer or
as an entity that can be assigned to and executed on a processor. A program
becomes a process when loaded into memory and thus is an active entity.
Difference between Program and Process:

1. A program contains a set of instructions designed to complete a specific task. A process is an instance of an executing program.
2. A program is a passive entity, as it resides in secondary memory. A process is an active entity, as it is created during execution and loaded into main memory.
3. A program exists at a single place and continues to exist until it is deleted. A process exists for a limited span of time, as it terminates after completing its task.
4. A program is a static entity. A process is a dynamic entity.
5. A program has no resource requirements; it only needs memory space to store its instructions. A process has high resource requirements; it needs resources such as the CPU, memory addresses, and I/O during its lifetime.
6. A program does not have any control block. A process has its own control block, called the Process Control Block.

A Process Control Block is a data structure that contains information about a process. The process control block is also known as a task control block, or an entry in the process table.
It is very important for process management, as the data structuring for processes is done in terms of the PCB. It also reflects the current state of the operating system.
Structure of the Process Control Block
The process control block stores many data items that are needed for efficient process management. The main data items are the following −
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This shows the number of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the
process.
Registers
This specifies the registers that are used by the process. They may include
accumulators, index registers, stack pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process
CPU Scheduling Information
The process priority, pointers to scheduling queues etc. is the CPU scheduling
information that is contained in the PCB. This may also include any other
scheduling parameters.
Memory Management Information
The memory management information includes the page tables or the segment
tables depending on the memory system used. It also contains the value of the
base registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list of
files etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are
all a part of the PCB accounting information.
Location of the Process Control Block
The process control block is kept in a memory area that is protected from the
normal user access. This is done because it contains important process
information. Some of the operating systems place the PCB at the beginning of the
kernel stack for the process as it is a safe location.

Differentiate between long term scheduler and short term scheduler. What
is the purpose of medium term scheduler?
Difference Between Long-Term and Short-Term Scheduler:

1. The long-term scheduler takes processes from the job pool. The short-term scheduler takes processes from the ready queue.
2. The long-term scheduler is also known as the job scheduler. The short-term scheduler is also known as the CPU scheduler.
3. In the long-term scheduler, programs are set up in a queue and the best job is selected as required. The short-term scheduler has no such queue.
4. The long-term scheduler regulates the degree of multiprogramming (DOM). The short-term scheduler has little effect on the degree of multiprogramming.
5. It decides which programs are admitted to the system for processing. It decides which ready process is most suitable to run next.
6. Its speed is less than that of the short-term scheduler. Its speed is very fast compared to the long-term scheduler.
7. The long-term scheduler changes the process state from New to Ready. The short-term scheduler changes the process state from Ready to Running.
8. Time-sharing operating systems have no long-term scheduler, or at most a minimal one; the short-term scheduler is always present.
9. It selects a good mix of I/O-bound and CPU-bound processes. It selects a new process for the CPU quite frequently.

Purpose of the medium-term scheduler: it handles swapping. It removes (swaps out) partially executed processes from main memory to reduce the degree of multiprogramming, and later reintroduces (swaps in) them so that execution continues from where it left off.

Describe message passing and shared memory technique in IPC system.


1. Shared Memory Model:
In this IPC model, a shared memory region is established which is used by the processes for data communication. This memory region is present in the address space of the process that creates the shared memory segment. Processes that want to communicate with this process must attach this memory segment to their own address space.

2. Message Passing Model:
In this model, the processes communicate with each other by exchanging messages. For this purpose a communication link must exist between the processes, and it must support at least two operations: send(message) and receive(message). The size of messages may be fixed or variable.

Describe briefly about different types of operating system


An operating system performs all the basic tasks such as managing files, processes, and memory. Thus the operating system acts as the manager of all resources, i.e., the resource manager, and becomes an interface between the user and the machine.

Types of Operating Systems: Some of the widely used operating systems are as follows −

1. Batch Operating System –
This type of operating system does not interact with the computer directly. An operator takes similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort the jobs with similar needs.

2. Time-Sharing Operating Systems –
Each task is given some time to execute so that all the tasks work smoothly. Each user gets CPU time as they share a single system. These systems are also known as multitasking systems. The tasks can be from a single user or from different users. The time that each task gets to execute is called a quantum.

3. Distributed Operating System –
These operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world at a great pace. Various autonomous interconnected computers communicate with each other using a shared communication network. The independent systems possess their own memory unit and CPU.

4. Network Operating System –
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and other networking functions. They allow shared access to files, printers, security, applications, and other networking functions over a small private network.

5. Real-Time Operating System –
These operating systems serve real-time systems. The time interval required to process and respond to inputs is very small; this time interval is called the response time.

Explain briefly about the services of operating system.


An operating system provides services to both the users and the programs.

• It provides programs an environment in which to execute.
• It provides users the services needed to execute programs in a convenient manner.

Following are a few common services provided by an operating system −

• Program execution

=> Operating systems handle many kinds of activities from user programs
to system programs like printer spooler, name servers, file server, etc.
Each of these activities is encapsulated as a process.

• I/O operations
=> An I/O subsystem comprises of I/O devices and their corresponding
driver software. Drivers hide the peculiarities of specific hardware devices
from the users.

• File System manipulation

=> A file represents a collection of related information. Computers can store files on disk (secondary storage) for long-term storage. Examples of storage media include magnetic tape, magnetic disk, and optical disk drives such as CD and DVD. Each of these media has its own properties, such as speed, capacity, data transfer rate, and data access methods.

• Communication

=> The OS enables processes to exchange information, either on the same computer or across a network.

• Error Detection

=> Errors can occur anytime and anywhere: in the CPU, in I/O devices, or in the memory hardware. The operating system constantly monitors the system to detect such errors and takes appropriate action to ensure correct and consistent computing.

• Resource Allocation

=> In a multi-user or multitasking environment, resources such as main memory, CPU cycles, and file storage must be allocated to each user or job.

• Protection
=> Considering a computer system having multiple users and concurrent
execution of multiple processes, the various processes must be protected from
each other's activities.

Describe the structure of operating system.


The operating system sits between the hardware and other software. To ensure smooth administration, the operating system is structured differently from most other programs. Usually, such systems consist of different layers. The base layer, the one furthest removed from the user interface, is the core of the OS and its most important element; that is why this part is usually loaded first. The core presents a direct interface to the hardware, because it initializes and passes on commands between programs and the hardware.

All other layers are assembled on top of the core element and are progressively further removed from interacting with the hardware. Each layer communicates with the layer above or below it. At the top is the user interface, which presents the interface between the user and the software. When a user executes a task, the command is transmitted through the different layers until it reaches the correct one, for example, the processor.
Unit 2

PART – A: (Multiple Choice Questions) 2x10=20 Marks

1 Which module gives control of the CPU to the process selected by the short-term scheduler?
a) dispatcher b) interrupt c) scheduler d) none of the mentioned

2 The interval from the time of submission/response of a process to the time of completion is termed as
a) waiting time b) turnaround time c) response time d) throughput
3 In priority scheduling algorithm, when a process arrives at the ready queue, its priority is compared
with the priority of: a) all process b) currently running process c) parent process d) init process

4 Which one of the following is a system call?


a) process control b) process scheduling c) process d) none of the mentioned

5 The number of process completed in unit of time is known as :


a) Response time. b) Turnaround time. c) Throughput. d) None of the above

6 Which module gives control of the CPU to the process selected by the short-term scheduler?
A. dispatcher
B. interrupt
C. scheduler
D. none of the mentioned
7 The processes that are residing in main memory and are ready and waiting to execute are kept on a list called:

A. job queue
B. ready queue
C. execution queue
D. process queue

8 The interval from the time of submission of a process to the time of completion is termed as:

A. waiting time
B. turnaround time
C. response time
D. throughput

9 In priority scheduling algorithm:

A. CPU is allocated to the process with highest priority


B. CPU is allocated to the process with lowest priority
C. equal priority processes cannot be scheduled
D. none of the mentioned
10 In multilevel feedback scheduling algorithm:

A. a process can move to a different classified ready queue


B. classification of ready queue is permanent
C. processes are not classified into groups
D. none of the mentioned

11 With multiprogramming, ______ is used productively.

A. time
B. space
C. money
D. All of these

12 An I/O bound program will typically have :

A. a few very short CPU bursts


B. many very short I/O bursts
C. many very short CPU bursts
D. a few very short I/O bursts

13 Round robin scheduling falls under the category of :

A. Non preemptive scheduling


B. Preemptive scheduling
C. None of these

14 Which of the following algorithms tends to minimize the process flow time ?

A. First come First served


B. Shortest Job First
C. Earliest Deadline First
D. Longest Job First

15 Under multiprogramming, turnaround time for short jobs is usually ________ and that for long jobs is slightly
___________.

A. Lengthened; Shortened
B. Shortened; Lengthened
C. Shortened; Shortened
D. Shortened; Unchanged
UNIT-2
PART – B: (Short Answer Questions)

1 Differentiate between preemptive and non-preemptive scheduling.


Difference between Preemptive and Non-Preemptive Scheduling:

1. In preemptive scheduling the CPU is allocated to a process for a certain amount of time. In non-preemptive scheduling the CPU is allocated to a process until it ends its execution or switches to the waiting state.
2. In preemptive scheduling the executing process can be interrupted in the middle of execution. In non-preemptive scheduling the executing process is not interrupted in the middle of execution.
3. Preemptive scheduling switches processes between the ready and running states and maintains the ready queue. Non-preemptive scheduling does not switch a process from the running state back to the ready state.
4. In preemptive scheduling, if high-priority processes frequently arrive in the ready queue, a low-priority process may have to wait long and may starve. In non-preemptive scheduling, if the CPU is allocated to a process with a large burst time, processes with small burst times may starve.
5. Preemptive scheduling is quite flexible, because critical processes get the CPU as soon as they arrive in the ready queue, no matter what process is currently executing. Non-preemptive scheduling is rigid: even if a critical process enters the ready queue, the process holding the CPU is not disturbed.
2 What is starvation? How it is resolved.
Priority scheduling can suffer from a major problem known as indefinite blocking, or starvation, in which a low-priority task can wait forever because there are always other jobs around that have higher priority. If this problem is allowed to occur, such processes will either run eventually, when the system load lightens, or will be lost when the system is shut down or crashes.

One common solution to this problem is aging, in which the priority of a job increases the longer it waits. Under this scheme a low-priority job will eventually have its priority raised high enough that it gets to run.
3 Define and differentiate between preemptive and non-preemptive scheduling.

Preemptive scheduling means that once a process has started its execution, the currently running process can be paused to handle some other process of higher priority; i.e., the CPU can be preempted from one process and given to another if required.

Non-preemptive scheduling means that once a process starts its execution, or the CPU is processing a specific process, it cannot be halted; in other words, we cannot preempt (take control of) the CPU from that process and give it to another.

4 What do you mean by processor affinity? How it is helpful in multiprocessing scheduling?


Processor affinity, or CPU pinning, enables the binding and unbinding of a process or a thread to a central
processing unit (CPU) or a range of CPUs, so that the process or thread will execute only on the designated CPU
or CPUs rather than any CPU.

5 Differentiate between turnaround time and waiting time.
TURNAROUND TIME: The time from when the process entered the ready queue for execution until the process completed its execution.
WAITING TIME: The time the process spent in the ready queue and waiting for I/O completion.
TURNAROUND TIME: Different CPU scheduling algorithms produce different turnaround times for the same set of processes.
WAITING TIME: A CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O, but only the amount of time the process spends waiting in the ready queue.
TURNAROUND TIME: Turnaround time is generally limited by the speed of the output device.
WAITING TIME: Waiting time has no such major effect.
6 Draw a neat process state diagram.

7 What do you mean by dispatch latency?


The dispatcher is the module that gives control of the CPU to the
process selected by the scheduler. The dispatcher needs to be as fast as possible, as it is run on every context
switch. The time consumed by the dispatcher is known
as dispatch latency
8 Differentiate between a process and a program.
PROGRAM: Contains a set of instructions designed to complete a specific task.
PROCESS: An instance of an executing program.
PROGRAM: A passive entity, as it resides in secondary memory.
PROCESS: An active entity, as it is created during execution and loaded into main memory.
PROGRAM: Exists at a single place and continues to exist until it is deleted.
PROCESS: Exists for a limited span of time, as it gets terminated after completion of the task.
PROGRAM: A static entity.
PROCESS: A dynamic entity.
PROGRAM: Has no resource requirement; it only requires memory space for storing the instructions.
PROCESS: Has a high resource requirement; it needs resources like the CPU, memory address space, and I/O during its lifetime.
PROGRAM: Does not have any control block.
PROCESS: Has its own control block, called the Process Control Block.
9 What do you mean by critical section?
Critical section is a code segment that can be accessed by only one process at a time. Critical section contains
shared variables which need to be synchronized to maintain consistency of data variables.
10 What do you mean by mutual exclusion?
If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
11 What are the different scheduling criteria for CPU scheduling?
• CPU Utilization
• Throughput
• Turnaround Time
• Waiting time
• Response time
12 What is a semaphore? What operations can be performed on a semaphore?
A semaphore is a variable which can hold only a non-negative integer value,
shared between all the threads/processes.
The wait and signal operations can be performed on a semaphore.
13 What requirement is to be satisfied for a solution of a critical section problem?
• Mutual Exclusion : If a process is executing in its critical section, then no other process is allowed to
execute in the critical section.
• Progress : If no process is executing in its critical section and some processes wish to enter, then the
selection of the process that will enter next cannot be postponed indefinitely.
• Bounded Waiting : A bound must exist on the number of times that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request
is granted.
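To illustrate mutual exclusion concretely (an example of mine, not part of the original notes), a lock around a shared update makes the increment a critical section that only one thread can execute at a time:

```python
import threading

counter = 0
lock = threading.Lock()  # enforces mutual exclusion

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # entry section: acquire the lock
            counter += 1  # critical section: at most one thread here
        # exit section: the lock is released on leaving the with-block

threads = [threading.Thread(target=increment, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000: no lost updates
```

Without the lock, two threads could read the same value of `counter` and both write back the same incremented result, losing an update.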

14 What is the use of cooperating processes?


• Modularity: Modularity involves dividing complicated tasks into smaller subtasks
• Information Sharing: Sharing of information between multiple processes can be accomplished using
cooperating processes
• Convenience: There are many tasks that a user needs to do such as compiling, printing, editing etc. It is
convenient if these tasks can be managed by cooperating processes.
• Computation Speedup: Subtasks of a single task can be performed parallely using cooperating
processes

15 What are the requirements that a solution to the critical section problem must
satisfy?
(Answer same as NO.13)
16 What are the benefits of multithreaded programming?
Responsiveness: Multithreading allows a program to keep running even if part of it is blocked.
Resource sharing: Threads share the memory and resources of their process, allowing better utilization of resources.
Economy: Creating and managing threads is cheaper than creating processes, since threads share the resources of the
process they belong to.
Scalability: One thread runs on one CPU. In multithreaded processes, threads can be distributed over a series of
processors to scale.

UNIT-2
PART – C: (Long Answer Questions)

Differentiate between Preemptive and Non-Preemptive scheduling. [5]


Preemptive Scheduling vs Non-Preemptive Scheduling
(Answer same as Part B, Q.3.)
Consider the following set of processes with their CPU Burst time, arrival time given in milliseconds and
priority.

Draw three Gantt charts for execution of the processes using SRTF, RR (Time quantum=2) and preemptive
priority scheduling. Separately compute average waiting time and average turnaround time of the processes on
execution of the three algorithms. [10]
Explain Peterson’s solution to the critical section problem.
Peterson’s solution is a software-based solution to the critical section problem. Consider two processes P0 and P1. For
convenience, when presenting Pi, we use Pj to denote the other process; that is, j == 1 - i.
The processes share two variables:
boolean flag[2];
int turn;
Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but is either
0 or 1). The structure of process Pi is shown below.
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;  // busy wait
    // critical section
    flag[i] = false;
    // remainder section
} while (1);
To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that
if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, turn
will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur, but
will be overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its
critical section first.
We now prove that this solution is correct. We need to show that:
1. Mutual exclusion is preserved,
2. The progress requirement is satisfied,
3. The bounded-waiting requirement is met.
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note
that, if both processes were executing in their critical sections at the same time, then flag[i] == flag[j] == true. These
two observations imply that P0 and P1 could not have successfully executed their while statements at about the same
time, since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes, say Pj, must have
successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j").
However, since at that time flag[j] == true and turn == j, and this condition persists as long as Pj is in its critical
section, mutual exclusion is preserved.
To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is
stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only possibility. If Pj is not
ready to enter the critical section, then flag[j] == false and Pi can enter its critical section. If Pj has set flag[j] to true
and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the
critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will
reset flag[j] to false, allowing Pi to enter. If Pj then sets flag[j] to true again, it must also set turn to i. Thus, since Pi
does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress)
after at most one entry by Pj (bounded waiting).
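The algorithm can be exercised directly with two Python threads. This is a sketch of my own: CPython's interpreter makes simple loads and stores effectively sequentially consistent, which Peterson's solution requires, and the short sleep in the busy-wait merely lets the peer thread run.

```python
import threading
import time

flag = [False, False]
turn = 0
counter = 0
N = 300  # iterations per process (kept small; busy-waiting is slow)

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True            # announce intent to enter
        turn = j                  # give the tie-break to the other process
        while flag[j] and turn == j:
            time.sleep(0.0001)    # busy wait (the sleep lets the peer run)
        counter += 1              # critical section
        flag[i] = False           # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 600: every increment survives, so mutual exclusion held
```

On hardware with weak memory ordering (e.g. plain C without barriers), Peterson's algorithm additionally needs memory fences; the Python version sidesteps that because of the interpreter's execution model.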

What do you mean by Semaphore? Discuss the Dining Philosophers problem using semaphore.
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic
operations: wait and signal. These operations were originally termed P (for wait; from the Dutch proberen, to test) and
V (for signal; from verhogen, to increment). The classical definition of wait in pseudocode is
wait(S) {
    while (S <= 0)
        ;  // no-op (busy wait)
    S--;
}
The classical definition of signal in pseudocode is
signal(S) {
    S++;
}
The Dining Philosopher Problem – The Dining Philosopher Problem states that K philosophers are seated around a circular
table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks
adjacent to him. Each chopstick may be picked up by either of its two adjacent philosophers, but not by both.

Dining-Philosophers Problem
• The structure of philosopher i:
process P[i]:
while true do {
    THINK;
    PICKUP(CHOPSTICK[i], CHOPSTICK[(i+1) % 5]);
    EAT;
    PUTDOWN(CHOPSTICK[i], CHOPSTICK[(i+1) % 5]);
}
There are three states of philosopher : THINKING, HUNGRY and EATING. Here there are two semaphores : Mutex
and a semaphore array for the philosophers.
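The pseudocode above can deadlock if every philosopher simultaneously picks up his left chopstick. A runnable Python sketch (names are my own) using one semaphore per chopstick, with a common deadlock-avoidance strategy: always acquire the lower-numbered chopstick first, so a circular wait cannot form.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one per chopstick
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # Deadlock avoidance: acquire the lower-numbered chopstick first.
    first, second = (left, right) if left < right else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait()
        chopstick[second].acquire()   # wait()
        meals[i] += 1                 # EAT
        chopstick[second].release()   # signal()
        chopstick[first].release()    # signal()

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [100, 100, 100, 100, 100]: everyone ate, no deadlock
```

Other standard remedies include allowing at most N-1 philosophers at the table or using the monitor-based state array (THINKING/HUNGRY/EATING) mentioned above.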

What do you mean by binary semaphore and counting semaphore? Explain implementation of wait () and signal.

A Counting Semaphore: This type of semaphore uses a count that allows a resource to be acquired or released numerous
times. If the initial count is 0, the counting semaphore is created in the unavailable state. The value of a counting
semaphore can range over an unrestricted domain.
Binary Semaphore: Binary semaphores are quite similar to counting semaphores, but their value is restricted to 0
and 1. In this type of semaphore, the wait operation proceeds only if the semaphore is 1, and the signal operation sets
the semaphore back to 1. It is simpler to implement than a counting semaphore.

Wait() Operation: This semaphore operation controls the entry of a task into the critical section. If the semaphore
value is positive, it is decremented and the task proceeds; otherwise the task is held up until the required condition
is satisfied. It is also called the P(S) operation.

Implementation of wait:
wait(S) {
    S.value--;
    if (S.value < 0) {
        // add this process to S's waiting queue
        block();
    }
}

Signal operation:This type of Semaphore operation is used to control the exit of a task from a critical section. It helps
to increase the value of the argument by 1, which is denoted as V(S).

Implementation of signal:
signal(S) {
    S.value++;
    if (S.value <= 0) {
        // remove a process P from S's waiting queue
        wakeup(P);
    }
}
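The blocking wait()/signal() pair can be sketched in Python with a condition variable standing in for block()/wakeup() (an illustrative sketch of mine; production code would use `threading.Semaphore` directly):

```python
import threading

class CountingSemaphore:
    """Minimal counting semaphore built on a mutex + condition variable."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                      # P(S)
        with self._cond:
            while self._value <= 0:
                self._cond.wait()        # block() instead of busy-waiting
            self._value -= 1

    def signal(self):                    # V(S)
        with self._cond:
            self._value += 1
            self._cond.notify()          # wakeup(P): wake one waiter

# Usage: a binary semaphore (initial value 1) protecting a shared counter.
s = CountingSemaphore(1)
total = 0

def add(n):
    global total
    for _ in range(n):
        s.wait()
        total += 1                       # critical section
        s.signal()

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 30000
```

Note the `while` (not `if`) around `self._cond.wait()`: a woken thread must re-check the condition, since another thread may have consumed the value first.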
Consider the following five processes, with the length of the CPU burst time given in milliseconds.

Process Burst time P1 - 10, P2 - 29, P3 - 3, P4 – 7, P5 - 12 Consider the First come First serve (FCFS), Non
Preemptive Shortest Job First (SJF), Round Robin (RR) with (quantum=10ms) scheduling algorithms. Illustrate
the scheduling using Gantt chart.
• Which algorithm will give the minimum average waiting time?
Ans: The Gantt chart for FCFS scheduling is
| P1 | P2 | P3 | P4 | P5 |
0    10   39   42   49   61
Turnaround time = Finish Time – Arrival Time
Turnaround time for process P1 = 10 – 0 = 10
Turnaround time for process P2 = 39 – 0 = 39
Turnaround time for process P3 = 42 – 0 = 42
Turnaround time for process P4 = 49 – 0 = 49
Turnaround time for process P5 = 61 – 0 = 61
Average turnaround time = (10+39+42+49+61)/5 = 40.2
The Gantt chart for SJF scheduling is
| P3 | P4 | P1 | P5 | P2 |
0    3    10   20   32   61
Turnaround time for process P3 = 3 – 0 = 3
Turnaround time for process P4 = 10 – 0 = 10
Turnaround time for process P1 = 20 – 0 = 20
Turnaround time for process P5 = 32 – 0 = 32
Turnaround time for process P2 = 61 – 0 = 61
Average turnaround time = (3+10+20+32+61)/5 = 25.2
The Gantt chart for RR (quantum = 10 ms) scheduling is
| P1 | P2 | P3 | P4 | P5 | P2 | P5 | P2 |
0    10   20   23   30   40   50   52   61
Turnaround time for process P1 = 10 – 0 = 10
Turnaround time for process P2 = 61 – 0 = 61
Turnaround time for process P3 = 23 – 0 = 23
Turnaround time for process P4 = 30 – 0 = 30
Turnaround time for process P5 = 52 – 0 = 52
Average turnaround time = (10+61+23+30+52)/5 = 35.2
Since all processes arrive at time 0, waiting time = turnaround time – burst time, so the average waiting times are
FCFS: 40.2 – 12.2 = 28.0, SJF: 25.2 – 12.2 = 13.0, RR: 35.2 – 12.2 = 23.0 (average burst = 61/5 = 12.2 ms).
So SJF gives the minimum average waiting time (and the minimum average turnaround time).
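The three Gantt charts above can be checked with a small simulator (a sketch with helper names of my own; all processes are assumed to arrive at time 0):

```python
from collections import deque

def fcfs(bursts):
    """Completion (= turnaround) times in arrival order."""
    t, tat = 0, []
    for b in bursts:
        t += b
        tat.append(t)
    return tat

def sjf(bursts):
    """Non-preemptive shortest-job-first; result indexed by process."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    t, tat = 0, [0] * len(bursts)
    for i in order:
        t += bursts[i]
        tat[i] = t
    return tat

def rr(bursts, q):
    """Round robin with time quantum q."""
    remaining = list(bursts)
    tat = [0] * len(bursts)
    ready, t = deque(range(len(bursts))), 0
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)      # back of the queue for its next slice
        else:
            tat[i] = t           # finished: record completion time
    return tat

bursts = [10, 29, 3, 7, 12]      # P1..P5
avg = lambda xs: sum(xs) / len(xs)
print(avg(fcfs(bursts)))         # 40.2
print(avg(sjf(bursts)))          # 25.2
print(avg(rr(bursts, 10)))       # 35.2
```

Running it reproduces the averages worked out by hand above.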

Explain the different criteria used by the operating system during scheduling, and their importance
in choosing the optimal algorithm for a given snapshot.
Scheduling Criteria
1. CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilisation can range from 0 to 100 but in a real-time system, it varies from 40 to 90
percent depending on the load upon the system.

2. Throughput –
A measure of the work done by CPU is the number of processes being executed and completed per unit
time. This is called throughput. The throughput may vary depending upon the length or duration of
processes.

3. Turnaround time –
For a particular process, an important criterion is how long it takes to execute that process. The time
elapsed from the time of submission of a process to the time of completion is known as the turnaround
time. Turnaround time is the sum of the times spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and waiting for I/O.

4. Waiting time –
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in the ready
queue.

5. Response time –
In an interactive system, turnaround time is not the best criterion. A process may produce some output
fairly early and continue computing new results while previous results are being output to the user.
Thus another criterion is the time from the submission of a request until the first response
is produced. This measure is called response time.

Explore the Reader’s Writer’s problem and Producer Consumer problem by using Semaphore.
Reader’s Writer’s Problem:
→A data set is shared among a number of concurrent processes
– Readers – only read the data set, do not perform any updates
– Writers – can both read and write the data set (perform the updates).
• If two readers read the shared data simultaneously, there will be no problem. If both a reader(s)
and writer share the same data simultaneously then there will be a problem.
• In the solution of the reader-writer problem, the reader processes share the following data structures:
Semaphore mutex, wrt;
int readcount;
• Where → Semaphore mutex is initialized to 1.
→ Semaphore wrt is initialized to 1.
→ Integer readcount is initialized to 0.

The structure of a writer process:


while (true) {
wait (wrt) ;
// writing is performed
signal (wrt) ;
}
The structure of a reader process:
while (true) {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);      // first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);    // last reader lets writers back in
    signal(mutex);
}
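The writer and reader pseudocode maps almost line for line onto Python semaphores; a runnable sketch of mine (`acquire`/`release` play the roles of wait/signal):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writer exclusion
readcount = 0
shared = []                      # the shared data set
snapshots = []                   # what the readers observed

def writer(item):
    wrt.acquire()                # wait(wrt)
    shared.append(item)          # writing is performed
    wrt.release()                # signal(wrt)

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:           # first reader blocks writers
        wrt.acquire()
    mutex.release()
    snapshots.append(list(shared))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:           # last reader releases writers
        wrt.release()
    mutex.release()

ts = [threading.Thread(target=writer, args=(i,)) for i in range(5)]
ts += [threading.Thread(target=reader) for _ in range(5)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(sorted(shared))  # [0, 1, 2, 3, 4]: all five writes completed safely
```

This is the "readers-preference" variant: while any reader holds `wrt`, newly arriving readers pass straight through, so a steady stream of readers can starve writers.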
Producer Consumer Problem:
The producer consumer problem is a synchronization problem. There is a fixed size buffer and the producer produces
items and enters them into the buffer. The consumer removes the items from the buffer and consumes them.
A producer should not produce items into the buffer when the consumer is consuming an item from the buffer and vice
versa. So the buffer should only be accessed by the producer or consumer at a time.The producer consumer problem
can be resolved using semaphores.
Producer Process:

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // put the item in the buffer
    signal(mutex);
    signal(full);
} while (1);

In the above code, mutex, empty and full are semaphores. Here mutex is initialized to 1, empty is initialized to n
(maximum size of the buffer) and full is initialized to 0.
The mutex semaphore ensures mutual exclusion. The empty and full semaphores count the number of empty and full
spaces in the buffer.
After the item is produced, a wait operation is carried out on empty, indicating that the number of empty slots in the
buffer has decreased by 1. Then a wait operation is carried out on mutex so that the consumer process cannot interfere.
After the item is put in the buffer, signal operations are carried out on mutex and full. The former indicates that the
consumer process can now act, and the latter shows that the number of full slots has increased by 1.
Consumer Process:

do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while (1);

The wait operation is carried out on full, indicating that the number of full slots in the buffer has decreased by 1.
Then a wait operation is carried out on mutex so that the producer process cannot interfere.
Then the item is removed from the buffer. After that, signal operations are carried out on mutex and empty. The former
indicates that the producer process can now act, and the latter shows that the number of empty slots in the buffer has
increased by 1.
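The scheme above runs directly in Python (a sketch of mine; `empty`, `full`, and `mutex` are initialized exactly as described: buffer size n, 0, and 1):

```python
import threading
from collections import deque

BUF = 5
buffer = deque()
mutex = threading.Semaphore(1)     # mutual exclusion on the buffer
empty = threading.Semaphore(BUF)   # counts empty slots (initially n)
full = threading.Semaphore(0)      # counts full slots (initially 0)
consumed = []

def producer(n):
    for i in range(n):             # item i is "produced" here
        empty.acquire()            # wait(empty)
        mutex.acquire()            # wait(mutex)
        buffer.append(i)           # put item in buffer
        mutex.release()            # signal(mutex)
        full.release()             # signal(full)

def consumer(n):
    for _ in range(n):
        full.acquire()             # wait(full)
        mutex.acquire()            # wait(mutex)
        consumed.append(buffer.popleft())  # remove item from buffer
        mutex.release()            # signal(mutex)
        empty.release()            # signal(empty)

p = threading.Thread(target=producer, args=(100,))
c = threading.Thread(target=consumer, args=(100,))
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(100)))  # True: nothing lost or duplicated
```

The order of the two wait calls matters: acquiring `mutex` before `empty`/`full` could leave a blocked producer or consumer holding the mutex, deadlocking the system.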
What is a thread? Discuss and differentiate between user-level and kernel-level threads with their advantages and
disadvantages. What are the different thread models we have? Explain them in detail.
[8]
Ans: A thread is a single sequence stream within in a process. It is also called lightweight processes. In a process,
threads allow multiple executions of streams. The CPU switches rapidly back and forth among the threads giving
illusion that the threads are running in parallel. A thread can be in any of several states (Running, Blocked, Ready or
Terminated). An operating system that has thread facility, the basic unit of CPU utilization is a thread. A thread has or
consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one other like
processes as a result threads shares with other threads their code section, data section, OS resources such as open files
and signals.
User - Level Threads
The user-level threads are implemented by users and the kernel is not aware of the existence of these threads. It handles
them as if they were single-threaded processes. User-level threads are small and much faster than kernel level threads.
They are represented by a program counter(PC), stack, registers and a small process control block. Also, there is no
kernel involvement in synchronization for user-level threads.
Advantages of User-Level Threads
Some of the advantages of user-level threads are as follows −

• User-level threads are easier and faster to create than kernel-level threads. They can also be more easily
managed.
• User-level threads can be run on any operating system.
• There are no kernel mode privileges required for thread switching in user-level threads.

Disadvantages of User-Level Threads


Some of the disadvantages of user-level threads are as follows −

• Multithreaded applications in user-level threads cannot use multiprocessing to their advantage.


• The entire process is blocked if one user-level thread performs blocking operation.
Kernel-Level Threads
Kernel-level threads are handled by the operating system directly and the thread management is done by the kernel. The
context information for the process as well as the process threads is all managed by the kernel. Because of this, kernel-
level threads are slower than user-level threads.
Advantages of Kernel-Level Threads
Some of the advantages of kernel-level threads are as follows −

• Multiple threads of the same process can be scheduled on different processors in kernel-level threads.
• The kernel routines can also be multithreaded.
• If a kernel-level thread is blocked, another thread of the same process can be scheduled by the kernel.
Disadvantages of Kernel-Level Threads
Some of the disadvantages of kernel-level threads are as follows −

• A mode switch to kernel mode is required to transfer control from one thread to another in a process.
• Kernel-level threads are slower to create as well as manage as compared to user-level threads.

Thread Models:
Many-To-One Model
• In the many-to-one model, many user-level threads are all mapped onto a single kernel thread.
• Thread management is handled by the thread library in user space, which is very efficient.
• However, if a blocking system call is made, then the entire process blocks, even if the other user threads
would otherwise be able to continue.
• Because a single kernel thread can operate only on a single CPU, the many-to-one model does not allow
individual processes to be split across multiple CPUs.
• Green threads for Solaris and GNU Portable Threads implement the many-to-one model in the past, but few
systems continue to do so today.

One-To-One Model:
• The one-to-one model creates a separate kernel thread to handle each user thread.
• One-to-one model overcomes the problems listed above involving blocking system calls and the splitting
of processes across multiple CPUs.
• However the overhead of managing the one-to-one model is more significant, involving more overhead
and slowing down the system.
• Most implementations of this model place a limit on how many threads can be created.
• Linux and Windows from 95 to XP implement the one-to-one model for threads.

Many-To-Many Model:
• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel
threads, combining the best features of the one-to-one and many-to-one models.
• Users have no restrictions on the number of threads created.
• Blocking kernel system calls do not block the entire process.
• Processes can be split across multiple processors.
• Individual processes may be allocated variable numbers of kernel threads, depending on the number of CPUs
present and other factors.

What is the purpose of CPU Scheduling? Mention various scheduling criteria’s. Explain in brief various CPU
scheduling algorithm.
CPU Scheduler’s main objective is to increase system performance in accordance with the chosen set of criteria. It is
the change of ready state to running state of the process. CPU scheduler selects a process among the processes that are
ready to execute and allocates CPU to one of them.
Various Scheduling Criteria:
1. CPU utilization- Keep the CPU as busy as possible.
2. Throughput – Number of processes that complete their execution per
time unit
3. Turnaround Time - The interval from the time of submission of a process
to the time of completion. Turnaround time is the sum of the periods
spent waiting to get into memory, waiting in the ready queue, executing
on the CPU, and doing I/O.
4. Waiting Time - the sum of the periods spent waiting in the ready queue.
5. Response Time- – amount of time it takes from when a request was
submitted until the first response is produced, not output (for time-
sharing environment)
Various CPU Scheduling Algorithm:
First Come First Serve
First Come First Serve is the full form of FCFS. It is the easiest and most simple CPU scheduling algorithm. In this type
of algorithm, the process which requests the CPU gets the CPU allocation first. This scheduling method can be managed
with a FIFO queue.

As the process enters the ready queue, its PCB (Process Control Block) is linked with the tail of the queue. So, when
CPU becomes free, it should be assigned to the process at the beginning of the queue.
Characteristics of FCFS method:
It is a non-preemptive scheduling algorithm.
Jobs are always executed on a first-come, first-serve basis
It is easy to implement and use.
However, this method is poor in performance, and the general wait time is quite high.
Shortest Remaining Time
The full form of SRT is Shortest remaining time. It is also known as SJF preemptive scheduling. In this method, the
process will be allocated to the task, which is closest to its completion. This method prevents a newer ready state process
from holding the completion of an older process.

Characteristics of SRT scheduling method:


This method is mostly applied in batch environments where short jobs are required to be given preference.
This is not an ideal method to implement it in a shared system where the required CPU time is unknown.
Associate with each process as the length of its next CPU burst. So that operating system uses these lengths, which
helps to schedule the process with the shortest possible time.
Priority Based Scheduling
Priority scheduling is a method of scheduling processes based on priority. In this method, the scheduler selects the tasks
to work as per the priority.
Priority scheduling also helps OS to involve priority assignments. The processes with higher priority should be carried
out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can be decided based
on memory requirements, time requirements, etc.

Round-Robin Scheduling
Round robin is the oldest, simplest scheduling algorithm. The name of this algorithm comes from the round-robin
principle, where each person gets an equal share of something in turn. It is mostly used for scheduling algorithms in
multitasking. This algorithm method helps for starvation free execution of processes.
Characteristics of Round-Robin Scheduling
Round robin is a hybrid model which is clock-driven
Time slice should be minimum, which is assigned for a specific task to be processed. However, it may vary for different
processes.
It is a real time system which responds to the event within a specific time limit.
Shortest Job First
SJF is a full form of (Shortest job first) is a scheduling algorithm in which the process with the shortest execution time
should be selected for execution next. This scheduling method can be preemptive or non-preemptive. It significantly
reduces the average waiting time for other processes awaiting execution.
Characteristics of SJF Scheduling
It is associated with each job as a unit of time to complete.
In this method, when the CPU is available, the next process or job with the shortest completion time will be executed
first.
It is commonly implemented with a non-preemptive policy; the preemptive variant is Shortest Remaining Time First.
This algorithm method is useful for batch-type processing, where waiting for jobs to complete is not critical.
It improves job output by offering shorter jobs, which should be executed first, which mostly have a shorter turnaround
time.
Multiple-Level Queues Scheduling
This algorithm separates the ready queue into various separate queues. In this method, processes are assigned to a queue
based on a specific property of the process, like the process priority, size of the memory, etc.
However, this is not an independent scheduling OS algorithm as it needs to use other types of algorithms in order to
schedule the jobs.
Characteristic of Multiple-Level Queues Scheduling:
Multiple queues should be maintained for processes with some characteristics.
Every queue may have its separate scheduling algorithms.
Priorities are given for each queue.
UNIT 3 COMPLETE SOLVED
1 Thrashing
A.is a natural consequence of virtual memory systems
B. can always be avoided by swapping
C. always occurs on large computers
D.can be caused by poor paging algorithms
E. None of the above
2 Memory
A.is a device that performs a sequence of operations specified by instructions in memory.
B. is the device where information is stored
C. is a sequence of instructions
is typically characterized by interactive processing and time-slicing of the CPU's time to allow quick
D.
response to each user.
E. None of the above
3 The principle of locality of reference justifies the use of
A.reenterable
B. non reusable
C. virtual memory
D.cache memory
E. None of the above
4 Thrashing can be avoided if
A.the pages, belonging to the working set of the programs, are in main memory
B. the speed of CPU is increased
C. the speed of I/O processor is increased
D.all of the above
E. None of the above
5 Fragmentation of the file system
A.occurs only if the file system is used improperly
B. can always be prevented
C.can be temporarily removed by compaction
D.is a characteristic of all file systems
E. None of the above
6 The memory allocation scheme subject to "external" fragmentation is
A.segmentation
B. swapping
C. pure demand paging
D.multiple contiguous fixed partitions
E. None of the above
7 Page stealing
A.is a sign of an efficient system
B. is taking page frames from other working sets
C. should be the tuning goal
D.is taking larger disk spaces for pages paged out
E. None of the above
8 A page fault
A.is an error is a specific page
B. occurs when a program accesses a page of memory
C.is an access to a page not currently in memory
D.is a reference to a page belonging to another program
E. None of the above
9 Which of the following statements is false?
A.a small page size causes large page tables
B. internal fragmentation is increased with small pages
C. a large page size causes instructions and data that will not be referenced brought into primary storage
D.I/O transfers are more efficient with large pages
E. None of the above
11 The address of a page table in memory is pointed by:
a. stack pointer
b. page table base register
c. page register
d. program counter

12 What is compaction?
a. a technique for overcoming internal fragmentation
b. a paging technique
c. a technique for overcoming external fragmentation
d. a technique for overcoming fatal error

13 With relocation and limit registers, each logical address must be _______ the limit
register.
a. Less than
b. equal to
c. greater than
d. None of these

14 The first fit, best fit and worst fit are strategies to select a ______.
a. Process from a queue to put in memory
b. Processor to run the next process
c. Free hole from a set of available holes
d. All of these

15 A system is in the safe state if:


a. The system can allocate resources to each process in some order and still avoid a deadlock
b. There exist a safe sequence
c. both (a) and (b)
d. None of the mentioned

16 Banker's algorithm for resource allocation deals with


A. deadlock prevention
B. deadlock avoidance
C. deadlock recovery
D. mutual exclusion
E. None of the above
17 Which policy replace a page if it is not in the favored subset of a process's pages?
A. FIFO

B. LRU

C. LFU

D. Working set

E. None of the above


1 What is the function of a lazy swapper?
In demand paging, a page is brought into the memory for its execution only when it is demanded. Demand Paging uses a lazy
swapper to swap the pages from memory. An important feature of a lazy swapper is that it never swaps until that page is
needed.
2 What do you mean by thrashing? What is its cause?
If a process does not have the number of frames it needs to support its pages in active use, it will quickly page-fault. This
high paging activity is called thrashing.
In other words, we can say that when the page-fault rate rises above a certain level, the system is thrashing.
Causes of Thrashing
Thrashing results in severe performance problems.
1) If CPU utilization is too low then we increase the degree of multiprogramming by introducing a new process to the system.
A global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the degree
of multiprogramming.
2) CPU utilization is plotted against the degree of multiprogramming.
3) As the degree of multiprogramming increases, CPU utilization also increases.
4) If the degree of multiprogramming is increased further, thrashing sets in and CPU utilization drops sharply.
5) So, at this point, to increase CPU utilization and to stop thrashing, we must decrease the degree of multiprogramming.
3 Distinguish between logical address space and physical address space.
• Basic: A logical address is the virtual address generated by the CPU; a physical address is an actual location in the memory unit.
• Address Space: The set of all logical addresses generated by the CPU for a program is the Logical Address Space; the set of all physical addresses mapped to those logical addresses is the Physical Address Space.
• Visibility: The user can view the logical address of a program, but can never view its physical address.
• Access: The user uses the logical address to reach the physical address, but cannot access the physical address directly.
• Generation: The logical address is generated by the CPU; the physical address is computed by the MMU.

4 What is the purpose of valid / invalid bit in demand paging?


In demand paging, only the pages that are currently required are brought into main memory. Assume that a process has 5
pages: A, B, C, D, E, and only A and B are in memory. With the help of the valid–invalid bit, the system can know, when required,
that pages C, D and E are not in memory.

In short: a 1 in the valid–invalid bit signifies that the page is in memory, and a 0 signifies that the page is invalid or has not
been brought into memory yet.
5 Consider a logical address space of 128 pages of 1024 words each mapped onto a physical memory of 64 frames. How
many bits are there in logical and physical address?
The logical address space is 128 × 1024 = 2^7 × 2^10 = 2^17 words, so a logical address needs 17 bits (7 for the page number
and 10 for the offset). The physical memory is 64 × 1024 = 2^6 × 2^10 = 2^16 words, so a physical address needs 16 bits
(6 for the frame number and 10 for the offset).

6 What is virtual memory?


Virtual memory is a memory management technique where secondary memory can be used as if it were a part of the main
memory. Virtual memory is a very common technique used in the operating systems (OS) of computers.
Virtual memory uses hardware and software to allow a computer to compensate for physical memory shortages, by
temporarily transferring data from random access memory (RAM) to disk storage. In essence, virtual memory allows a
computer to treat secondary memory as though it were the main memory.
7 How paging and segmentation are not free from internal & external fragmentation respectively?
Paging is not free from internal fragmentation because a page has a fixed size, but processes may request more or less space.
Say a page is 32 units and a process requests 20 units: when the page is given to the requesting process, the remaining 12
units of "internal" space are wasted. Paging is free from external fragmentation because a process may be allocated frames
that are non-contiguous in physical memory, while the logical representation of those blocks remains contiguous in virtual
memory.
Segmentation suffers from external fragmentation because a segment of a process is laid out contiguously in physical memory,
and fragmentation occurs as segments of dead processes are replaced by segments of new processes. Segmentation,
however, enables processes to share code; for instance, two different processes could share a code segment but have distinct
data segments. Pure paging does not suffer from external fragmentation, but instead suffers from internal fragmentation:
processes are allocated at page granularity, and if a page is not completely utilized, the leftover space is wasted. Paging also
enables processes to share code at the granularity of pages.
8 What is the use of wait-for-graph?

A wait-for graph is one of the methods for detecting a deadlock situation. This method is suitable for smaller systems. A
graph is drawn based on the transactions and their locks on resources; if the graph contains a closed loop or cycle, then there
is a deadlock.

11 Differentiate between internal and external fragmentation.

1. In internal fragmentation, fixed-sized memory blocks are assigned to processes; in external fragmentation, variable-sized
memory blocks are assigned to processes.
2. Internal fragmentation happens when the memory block assigned to a process is larger than the process requires; external
fragmentation happens as processes are loaded into and removed from memory over time.
3. The solution to internal fragmentation is the best-fit block; the solutions to external fragmentation are compaction, paging
and segmentation.
4. Internal fragmentation occurs when memory is divided into fixed-sized partitions; external fragmentation occurs when
memory is divided into variable-sized partitions based on the sizes of processes.
5. The difference between the memory allocated and the space actually required is called internal fragmentation; the unused
spaces formed between non-contiguous memory fragments, which are too small to serve a new process, are called external
fragmentation.

12 How to recover from deadlock?

There are two options for breaking a deadlock:


i) To abort one or more processes to break the circular wait (Process Termination).
ii) To preempt some resources from one or more of deadlock processes (Resource Preemption).
13 Which parameters have to be maintained to implement the banker’s algorithm?

Let n = number of processes, and m = number of resource types.


– Available: Vector of length m indicates the number of available resources of each type. If available [j] = k, there are k
instances of resource type Rj available.
– Max: n x m matrix defines the maximum demand of each process. If Max [i,j] = k, then process Pi may request at most k
instances of resource type Rj .
– Allocation: n x m matrix defines the number of resources of each type currently allocated to each process. If Allocation[i,j]
= k then Pi is currently allocated k instances of resource type Rj.
– Need: n x m matrix indicates the remaining resource need of each process. If Need[i,j] = k, then Pi may need k more
instances of Rj to complete its task. Need [i,j] = Max[i,j] – Allocation [i,j].
14 What do you mean by swapping? Why do we need this?

A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued
execution.
As in the round-robin CPU-scheduling algorithm, when a quantum expires, the memory manager swaps out that process
so that another process can be swapped into the memory space that has been freed.
Need for swapping: it is required when better processing speed is needed and the RAM is not large enough to hold all active
processes. There comes a point, however, when even swapping does not work well; the only remedy then is to add more
RAM.
15 What are the approaches we follow for the address binding?

A user program will go through several steps, before being executed as shown. Address binding of instructions and data to
memory addresses can happen at three different stages
– Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location
changes
– Load time: Must generate relocatable code if memory location is not known at compile time
– Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment
to another. Need hardware support for address maps (e.g., base and limit registers)
16 What is the role of a page table in paging?

Page Table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical
addresses. Logical addresses are generated by the CPU for the pages of the processes therefore they are generally used by the
processes. Physical addresses are the actual frame address of the memory. They are generally used by the hardware or more
specifically by RAM subsystems.
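The lookup the page table performs can be sketched in a few lines of Python. The page size and the page-to-frame mapping below are made-up values for illustration only:

```python
PAGE_SIZE = 1024                      # assumed page size in words (illustrative)
page_table = {0: 5, 1: 9, 2: 1}       # hypothetical page -> frame mapping

def to_physical(logical_address):
    # split the logical address into page number and offset
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]          # the lookup the MMU performs via the page table
    return frame * PAGE_SIZE + offset

print(to_physical(2 * PAGE_SIZE + 100))   # page 2 maps to frame 1
```

The offset is carried over unchanged; only the page number is replaced by the frame number.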
17 Differentiate between the base and limit registers.

The base register indicates where the page table starts in memory (this can be either a physical or a logical
address) and the limit register indicates the size of the table. The registers are usually not loaded directly; their values are
usually stored in the Process Control Block (PCB).
18 What is the purpose of wait for graph? Justify your answer.

Wait-for-graph is one of the methods for detecting the deadlock situation. This method is suitable for smaller database. In this
method a graph is drawn based on the transaction and their lock on the resource. If the graph created has a closed loop or a
cycle, then there is a deadlock.
19 What are the necessary conditions for deadlock?

Deadlock can arise if four conditions hold simultaneously


• Mutual exclusion: only one process at a time can use a resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.
• Hold and wait: a process is holding at least one resource and is waiting to acquire additional resources held by other
processes (i.e., you hold a resource and wait for another one).
• No preemption: a resource can be released only voluntarily by the process holding it, after that process has completed its task
(ex., in FCFS a process use the CPU until it terminates).
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1,
P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a
resource that is held by P0.
20 What is a resource-allocation graph?

The resource-allocation graph is a pictorial representation of the state of a system. As its name suggests, the resource-
allocation graph records the complete information about all the processes that are holding some resources or waiting for
some resources. It also contains information about all the instances of all the resources, whether they are available or being
used by the processes. In a resource-allocation graph, a process is represented by a circle, while a resource is represented by a
rectangle.
21 What is the basic approach of Page Replacement?

• A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to
guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs
(primary storage and processor time) of the algorithm itself.
If no frame is free is available, find one that is not presently being used and free it. A frame can be freed by writing its
contents to swap space, and changing the page table to show that the page is no longer in memory. Now the freed frame can be
used to hold the page for which the process faulted.
22 What are the major problems to implement Demand Paging?
The two major problems to implement demand paging is developing
a. Frame allocation algorithm
b. Page replacement algorithm
23 Give the basic concepts about paging.
Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme
permits the physical address space of a process to be non – contiguous.
24 Explain the basic concepts of segmentation.
In Operating Systems, Segmentation is a memory management technique in which, the memory is divided into the variable
size parts. Each part is known as segment which can be allocated to a process.The details about each segment are stored in a
table called as segment table. Segment table is stored in one (or many) of the segments.
25 Differentiate local and global page replacement algorithm.
Local replacement means that an incoming page is brought in only to the relevant process address space. Global
replacement policy allows any page frame from any process to be replaced. The latter is applicable to variable partitions
model only.
26 How is memory protected in a paged environment?
Memory protection is a way to control memory access rights on a computer, and is a part of most modern instruction set
architectures and operating systems. The main purpose of memory protection is to prevent a process from
accessing memory that has not been allocated to it.
Memory protection in a paged environment is accomplished by protection bits that are associated with each frame

PART C : LONG QUESTION ANSWERS


Consider the following page reference string 1,2,3,4,5,3,4,1,6,7,8,7,8,1,7,6,2,5,4,5,3,2
Calculate the number of page faults in each case using the following algorithms: (i) FIFO (ii) LRU (iii) Optimal.
1 Assume the memory size is 4 frames.

With 4 frames and the reference string above, FIFO gives 13 page faults, LRU gives 13 page faults, and Optimal gives 12
page faults (the first four references 1, 2, 3, 4 fault in every policy while the frames fill).
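The three policies can be checked with a short Python simulation of the reference string above (the function names are my own):

```python
from collections import deque

REF = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 1, 7, 6, 2, 5, 4, 5, 3, 2]

def fifo_faults(ref, frames):
    mem, order, faults = set(), deque(), 0
    for p in ref:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.remove(order.popleft())   # evict the oldest-loaded page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(ref, frames):
    mem, faults = [], 0                       # ordered least- to most-recently used
    for p in ref:
        if p in mem:
            mem.remove(p)                     # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)                    # evict the least recently used page
        mem.append(p)
    return faults

def optimal_faults(ref, frames):
    mem, faults = [], 0
    for i, p in enumerate(ref):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            # evict the page whose next use lies farthest in the future (or never)
            def next_use(q):
                return ref.index(q, i + 1) if q in ref[i + 1:] else float('inf')
            mem.remove(max(mem, key=next_use))
        mem.append(p)
    return faults

print(fifo_faults(REF, 4), lru_faults(REF, 4), optimal_faults(REF, 4))
```

With 4 frames this reports 13 faults for FIFO, 13 for LRU, and 12 for Optimal.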
Write about segmentation with example discuss basic difference between paging and segmentation?
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different sizes,
2 one for each module that contains pieces that perform related functions. Each segment is actually a different logical address
space of the program. When a process is to be executed, its corresponding segmentation are loaded into non-contiguous
memory though every segment is loaded into a contiguous block of available memory.
Following are the important differences between Paging and Segmentation.
1. In paging, a process address space is broken into fixed-sized blocks called pages; in segmentation, it is broken into
varying-sized blocks called sections (segments).
2. The operating system divides the memory into pages; the compiler is responsible for calculating the segment size, the
virtual address and the actual address.
3. Page size is determined by the available memory; section size is determined by the user.
4. Paging is faster in terms of memory access; segmentation is slower than paging.
5. Paging can cause internal fragmentation as some pages may go underutilized; segmentation can cause external
fragmentation as some memory blocks may not be used at all.
6. During paging, a logical address is divided into a page number and a page offset; during segmentation, a logical address is
divided into a section number and a section offset.
7. The page table stores the page data; the segmentation table stores the segmentation data.

When do page fault occurs? Discuss the action taken by the operating system, when a page fault occurs.
3 A page fault occurs when an access to a page that has not been brought into main memory takes place. The operating
system verifies the memory access, aborting the program if it is invalid. If it is valid a free frame is located and I/O
requested to read the needed page into the free frame. Upon completion of I/O, the process table and page table are
updated and the instruction is restarted.

Handling Page Fault.

1. Check the location of the referenced page in the PMT


2. If a page fault occurred, call on the operating system to fix it
3. Using the frame replacement algorithm, find the frame location
4. Read the data from disk to memory
5. Update the page map table for the process
6. The instruction that caused the page fault is restarted when the process resumes execution.

Consider the following snapshot of a system.


Allocation Max
A B C D A B C D
P0 0 0 1 2 0 0 1 2
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
Available = 1 5 2 0
Using Banker’s algorithm, answer the following questions.
(i) What is the content of matrix need?
4 (ii) Is the system in a safe state?
(iii) If a request from process P1 arrives for (0, 4, 2, 0) can the request be granted immediately?
(i) Need[i, j] = Max[i, j] – Allocation[i, j], so the content of the Need matrix (A B C D) is:
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
(ii) Yes, the system is in a safe state. Starting from Available = (1, 5, 2, 0), P0's need (0, 0, 0, 0) can be met at once; releasing
its allocation raises Available to (1, 5, 3, 2), after which P2's need (1, 0, 0, 2) can be met, raising Available to (2, 8, 8, 6), and
then P1, P3 and P4 can all finish in turn. One safe sequence is <P0, P2, P1, P3, P4>.
(iii) Yes. The request (0, 4, 2, 0) does not exceed P1's need (0, 7, 5, 0) or the Available vector (1, 5, 2, 0). Pretending to grant
it leaves Available = (1, 1, 0, 0), Allocation[P1] = (1, 4, 2, 0) and Need[P1] = (0, 3, 3, 0); the safety algorithm still finds a safe
sequence (e.g. <P0, P2, P3, P4, P1>), so the request can be granted immediately.
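All three parts can be checked mechanically. Below is a small Python sketch of the Banker's safety algorithm and the resource-request test, loaded with the snapshot from this question (function names are my own):

```python
def is_safe(available, allocation, need):
    work = list(available)
    finish = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                # Pi can finish; it releases everything it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []          # no process can proceed: unsafe state
    return True, sequence

# Snapshot from the question (resource types A B C D)
allocation = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
maximum    = [[0,0,1,2], [1,7,5,0], [2,3,5,6], [0,6,5,2], [0,6,5,6]]
available  = [1, 5, 2, 0]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

def can_grant(request, pid):
    # the request must not exceed the process's need or the available vector
    if any(r > n for r, n in zip(request, need[pid])):
        return False
    if any(r > a for r, a in zip(request, available)):
        return False
    # pretend to grant the request, then re-run the safety check
    avail2 = [a - r for a, r in zip(available, request)]
    alloc2 = [row[:] for row in allocation]
    alloc2[pid] = [a + r for a, r in zip(alloc2[pid], request)]
    need2 = [row[:] for row in need]
    need2[pid] = [n - r for n, r in zip(need2[pid], request)]
    return is_safe(avail2, alloc2, need2)[0]

print(is_safe(available, allocation, need))
print(can_grant([0, 4, 2, 0], 1))
```

Running the sketch confirms the state is safe and that P1's request (0, 4, 2, 0) can be granted.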

Distinguish between internal and external fragmentation. Provide any two solutions to avoid external
fragmentation.

5
1. In internal fragmentation, fixed-sized memory blocks are assigned to processes; in external fragmentation, variable-sized
memory blocks are assigned to processes.
2. Internal fragmentation happens when the memory block assigned to a process is larger than the process requires; external
fragmentation happens as processes are loaded into and removed from memory over time.
3. The solution to internal fragmentation is the best-fit block; the solutions to external fragmentation are compaction, paging
and segmentation.
4. Internal fragmentation occurs when memory is divided into fixed-sized partitions; external fragmentation occurs when
memory is divided into variable-sized partitions based on the sizes of processes.
5. The difference between the memory allocated and the space actually required is called internal fragmentation; the unused
spaces formed between non-contiguous memory fragments, which are too small to serve a new process, are called external
fragmentation.

Solution to external fragmentation:


1) Compaction: shuffling the fragmented memory into one contiguous location.
2) Virtual memory addressing by using paging and segmentation.

Explain about the necessary conditions for deadlock. How we can prevent deadlock by using these?

When a process requests for the resource that is been held another process which needs another resource to continue, but is
been held by the first process, then it is called a deadlock.

There are 4 conditions necessary for the occurrence of a deadlock.


1. Mutual Exclusion:
When two people meet on a narrow staircase landing, they cannot simply walk through, because there is space
only for one person. This condition, that only one person (or process) may use the step between them (or the resource) at a time, is the first
6 condition necessary for the occurrence of the deadlock.
2. Hold and Wait:
When the two people refuse to retreat and hold their ground, it is called holding. This is the next necessary
condition for the deadlock.
3. No Preemption:
To resolve the deadlock, one could simply cancel one of the processes so that the other can continue. But the operating system
does not do so: it allocates the resources to the processes for as much time as needed until the task is completed.
Hence, there is no temporary reallocation of the resources. This is the third condition for deadlock.
4. Circular Wait:
When the two people refuse to retreat and each waits for the other to retreat so that they can complete their task, it is
called circular wait. It is the last condition for the deadlock to occur.
Deadlocks can be avoided by avoiding at least one of the four conditions, because all this four conditions are required
simultaneously to cause deadlock.
Mutual Exclusion
Resources shared such as read-only files do not lead to deadlocks but resources, such as printers and tape drives, requires
exclusive access by a single process.
Hold and Wait
In this condition processes must be prevented from holding one or more resources while simultaneously waiting for one
or more others.
No Preemption
Preemption of process resource allocations can avoid the condition of deadlocks, where ever possible.
Circular Wait
Circular wait can be avoided if we number all resources, and require that processes request resources only in strictly
increasing(or decreasing) order.

Write short notes on


i. TLB
ii. Semaphore
(i) TLB
• The operating system divides each incoming program into pages of equal size; the sections of the disk are called
blocks or sectors.
• The sections of main memory are called page frames; one sector holds one page of job instructions and fits
into one page frame of memory.
• Fixed-size blocks in main memory are called page frames, and breaking logical memory into blocks of the same
size gives pages.
7
• The CPU generates a logical address containing a page number and an offset. The page number is used to retrieve the
frame number from the page table, and the physical address is then calculated as base + offset.
• The page table can be implemented with a set of registers, but as the page table grows, registers become unsuitable;
a special-purpose hardware cache called the translation look-aside buffer (TLB) is then used for page-table entries.
• The TLB is very effective in improving the performance of page-frame access, because the cache is built in a
technology faster than the primary-memory technology.

(ii)Semaphore

Semaphores are often used to restrict the number of threads than can access some (physical or logical) resource.
Semaphores are devices used to help with synchronization. If multiple processes share a common resource, they need a
way to be able to use that resource without disrupting each other. You want each process to be able to read from and write
to that resource uninterrupted.
A semaphore will either allow or disallow access to the resource, depending on how it is set up. One example setup
would be a semaphore which allowed any number of processes to read from the resource, but only one could ever be in
the process of writing to that resource at a time.

Semaphores are commonly use for two purposes: to share a common memory space and to share access to files.
Semaphores are one of the techniques for inter process communication (IPC). The C programming language provides a
set of interfaces or “functions” for managing semaphores.
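As an illustration, a binary semaphore from Python's threading module can serialize updates to a shared counter. This is a minimal sketch, not tied to any particular textbook example:

```python
import threading

sem = threading.Semaphore(1)      # binary semaphore guarding the critical section
counter = 0

def worker(iterations):
    global counter
    for _ in range(iterations):
        sem.acquire()             # wait (P): block until the semaphore is free
        counter += 1              # critical section: one thread updates at a time
        sem.release()             # signal (V): let the next thread in

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                    # 4 threads x 10000 increments each
```

Without the acquire/release pair, the interleaved read-modify-write sequences could lose updates; with the semaphore the final count is always 40000.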

Given memory partitions of 120K, 520K, 320K, 324K and 620K (in order). How would each of the First fit, Best fit
and worst fit algorithms place processes of 227K, 432K, 127K and 441K (in order)? Which algorithm makes the
8 most efficient use of memory?

First fit: 227K goes into the 520K partition (leaving 293K); 432K into the 620K partition (leaving 188K); 127K into the 293K
remainder of the 520K partition; 441K cannot be placed and must wait.
Best fit: 227K goes into the 320K partition; 432K into the 520K partition; 127K into the 324K partition; 441K into the 620K
partition.
Worst fit: 227K goes into the 620K partition (leaving 393K); 432K into the 520K partition; 127K into the 393K remainder of
the 620K partition; 441K cannot be placed and must wait.
Best fit makes the most efficient use of memory here: it is the only strategy that places all four processes.
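The three placement strategies can be simulated with the sketch below (helper names are my own). It returns, for each process, the index of the partition it was placed in, or None if the process must wait:

```python
def allocate(partitions, processes, strategy):
    holes = list(partitions)               # remaining free space per partition
    placement = []
    for size in processes:
        candidates = [i for i, h in enumerate(holes) if h >= size]
        if not candidates:
            placement.append(None)         # no hole big enough: process waits
            continue
        if strategy == "first":
            i = candidates[0]              # first hole that fits
        elif strategy == "best":
            i = min(candidates, key=lambda c: holes[c])   # tightest fit
        else:                              # "worst": largest hole
            i = max(candidates, key=lambda c: holes[c])
        holes[i] -= size
        placement.append(i)
    return placement

partitions = [120, 520, 320, 324, 620]
processes = [227, 432, 127, 441]
for s in ("first", "best", "worst"):
    print(s, allocate(partitions, processes, s))
```

Only best fit places all four processes; first fit and worst fit both leave the 441K process waiting.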
Distinguish between internal and external fragmentation. Provide any two solutions to avoid external
9 fragmentation.
REFER QUESTION NO. 5
Consider the following snapshot of a system:
Allocation Max
A B C A B C
P0 0 1 0 7 5 3
P1 2 0 0 3 2 2
P2 3 0 2 9 0 2
10 P3 2 1 1 2 2 2
P4 0 0 2 4 3 3
Available = 3 3 2
i. What is the content of the matrix Need?
ii. Is the system in a safe state? If yes, what is the safe sequence? Show the detailed steps as per Banker’s
algorithm?

The method is the same as in Question No. 4.
(i) Need = Max – Allocation: P0 (7, 4, 3), P1 (1, 2, 2), P2 (6, 0, 0), P3 (0, 1, 1), P4 (4, 3, 1).
(ii) Yes, the system is in a safe state. Starting from Available = (3, 3, 2): P1's need (1, 2, 2) can be met, and releasing its
allocation raises Available to (5, 3, 2); then P3 (0, 1, 1) can finish, giving (7, 4, 3); then P4 (4, 3, 1), giving (7, 4, 5); then
P2 (6, 0, 0), giving (10, 4, 7); and finally P0 (7, 4, 3). The safe sequence is <P1, P3, P4, P2, P0>.


Consider the following segment table
Segment Base Length

11 0 240 500
1 2150 28
2 180 60
3 1175 470
4 1482 55
What are the physical addresses for the following logical addresses?
(a) 0,280 (b) 1,20 (c) 2,150 (d) 3,320 (e) 4,188
(a) Segment 0, offset 280 < length 500, so physical address = 240 + 280 = 520.
(b) Segment 1, offset 20 < length 28, so physical address = 2150 + 20 = 2170.
(c) Segment 2, offset 150 ≥ length 60: illegal reference, the hardware traps to the operating system.
(d) Segment 3, offset 320 < length 470, so physical address = 1175 + 320 = 1495.
(e) Segment 4, offset 188 ≥ length 55: illegal reference, trap to the operating system.
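The translation rule (physical = base + offset, with a trap whenever offset ≥ length) can be sketched directly from the segment table in the question:

```python
# Segment table from the question: segment -> (base, length)
SEGMENTS = {0: (240, 500), 1: (2150, 28), 2: (180, 60), 3: (1175, 470), 4: (1482, 55)}

def translate(segment, offset):
    base, length = SEGMENTS[segment]
    if offset >= length:
        return None          # offset outside the segment: addressing-error trap
    return base + offset

for seg, off in [(0, 280), (1, 20), (2, 150), (3, 320), (4, 188)]:
    print(seg, off, translate(seg, off))
```

Addresses (2, 150) and (4, 188) exceed their segment lengths, so they trap rather than translate.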
What do you mean by Thrashing? What are the methods to avoid Thrashing?
If a process does not have enough frames to support the pages in active use, it will page-fault very frequently.
This high paging activity is called thrashing.

We must provide a process with as many frames as it needs. Several techniques are used.

The Working-Set Model (Strategy): it starts by looking at how many frames a process is actually using. This defines
12 the locality model.

Locality Model It states that as a process executes, it moves from locality to locality.

A locality is a set of pages that are actively used together.

A program is generally composed of several different localities which overlap.


Unit 4
PART – A: (Multiple Choice Questions) 2x10=20
Marks

1 RAID level _____ is also known as block interleaved parity organisation and uses block level striping and keeps a
parity block on a separate disk.

A. 1
B. 2
C. 3
D. 4

2 In RAID level 4, one block read, accesses __________.

A. only one disk


B. all disks simultaneously
C. all disks sequentially
D. None of these

3 Access in which records are accessed from and inserted into file, is classified as

1. direct access
2. sequential access
3. random access
4. duplicate access

4 Preparation of disc for subsequent file storage is classified as

1. disc format
2. disc address
3. disc footer
4. disc header

5 Secondary storage memory is basically

1. volatile memory
2. non volatile memory
3. backup memory
4. impact memory

6 In fixed head discs, sum of rotational delay and transfer time is equals to

1. access time
2. delay time
3. processing time
4. storage time

7 The time taken by the disc to rotate and read data from the right place is classified as

1. rotational delay
2. access delay
3. seek time delay
4. reversal delay

8 Which of the following is an example of direct access?

1. magnetic disc
2. floppy disc
3. program tape
4. plain disc

9 File which is created to carry out processing of data is classified as

1. master file
2. transaction file
3. particular file
4. reference file

10 Possible dangers and threats for files are that a file can be

1. destroyed
2. modified
3. accessed
4. all of above

UNIT-4
PART – B: (Short Answer Questions)

1 Elaborate about the file allocation techniques.


Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk
Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk blocks can be scattered
anywhere on the disk.
Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks occupied by a file. Each
file has its own index block

2 Write short note on SCAN scheduling.


In SCAN disk scheduling algorithm, head starts from one end of the disk and moves towards the other end, servicing
requests in between one by one and reach the other end. Then the direction of the head is reversed and the process
continues as head continuously scan back and forth to access the disk.
3 How CLOOK scheduling works?
• Circular-LOOK Algorithm is an improved version of the LOOK Algorithm.
• Head starts from the first request at one end of the disk and moves towards the last request at the other end servicing
all the requests in between.
• After reaching the last request at the other end, head reverses its direction.
• It then returns to the first request at the starting end without servicing any request in between.
• The same process repeats.

4 Differentiate between sequential access and direct access.


Sequential access must begin at the beginning and access each element in order, one after the other. Direct access
allows the access of any element directly by locating it by its index number or address. Arrays allow direct access.
Magnetic tape has only sequential access, but CDs had direct access. If you are on a railroad train, to go from one car to
another you must use sequential access. But when you board the train initially you have direct access. Direct access is
faster than sequential access, but it requires some external mechanism (array index, file byte number, railroad platform)
5 Explain the attributes of a file.

• Name . It is the only information which is in human-readable form.


• Identifier. The file is identified by a unique tag(number) within file system.
• Type. It is needed for systems that support different types of files.
• Location. Pointer to file location on device.
• Size. The current size of the file.
• Protection. This controls and assigns the power of reading, writing, executing.
• Time, date, and user identification. This is the data for protection, security, and usage monitoring.

6 Differentiate between contiguous allocation and linked allocation.


• Linked allocation is also called chained allocation. Each block in the chain contains a pointer to the next block, so the
blocks need not be contiguous, and a file can be extended or relocated easily.
• In contiguous allocation, a file occupies a consecutive run of blocks starting from its first block. It suffers from external
fragmentation, so there is a need for compaction, and there is a problem when files grow.
• In indexed allocation, by contrast, each file has its own separate index block, with one entry for each data block of
the file.

7 Explain briefly about DMA..


Direct Memory Access (DMA) transfers the block of data between the memory and peripheral devices of the
system, without the participation of the processor. The unit that controls the activity of accessing memory directly is
called a DMA controller.
The processor relinquishes the system bus for a few clock cycles. So, the DMA controller can accomplish the task of
data transfer via the system bus
8 What do you mean by seek time?
Seek time is the time taken for a hard disk controller to locate a specific piece of stored data. Other delays include
transfer time (data rate) and rotational delay (latency)

UNIT-4
PART – C: (Long Answer Questions)

What is a file? Explain various file allocation techniques with their advantages and disadvantages.
A file is a named collection of related information that is recorded on secondary storage such as magnetic disks, magnetic
tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by
the file's creator and user.

There are mainly three methods of file allocation in the disk. Each method has its advantages and disadvantages. Mainly a
system uses one method for all files within the system.

• Contiguous allocation
• Linked allocation
• Indexed allocation
1
Contiguous allocation

In this scheme, a file is made from the contiguous set of blocks on the disk. Linear ordering on the disk is defined by the
disk addresses.

• Each file in the disk occupies a contiguous address space on the disk.
• In this scheme, addresses are assigned in a linear fashion.
• It is very easy to implement the contiguous allocation method.
• In the contiguous allocation technique, external fragmentation is a major issue.
Advantages:

• In contiguous allocation, both sequential and direct access are supported.
• For direct access, if a file starts at block b, its kth block can be reached directly as block b + k.
• This is very fast, and the number of seeks is minimal in the contiguous allocation method.

Disadvantages:

• Contiguous allocation method suffers internal as well as external fragmentation.


• In terms of memory utilization, this method is inefficient.
• It is difficult to increase the file size because it depends on the availability of contiguous memory.

Linked allocation

The problems of contiguous allocation are solved in the linked allocation method. In this scheme, disk blocks are
arranged in the linked list form which is not contiguous.

Advantages:

1. In terms of the file size, this scheme is very flexible.


2. We can easily increase or decrease the file size, and the system does not have to worry about contiguous chunks of
memory.
3. This method is free from external fragmentation, which makes it better in terms of memory utilization.

Disadvantages:

1. In this scheme, there is a large number of seeks because the file blocks are randomly distributed on the disk.
2. Linked allocation is comparatively slower than contiguous allocation.
3. Random or direct access is not supported by this scheme; we cannot access the blocks directly.
4. The pointers are extra overhead on the system due to the linked list.

Indexed Allocation

In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file
contains its own index, which is in the form of an array of disk-block addresses.

Advantages:

1. This scheme supports random access of the file.


2. This scheme provides fast access to the file blocks.
3. This scheme is free from the problem of external fragmentation.

Disadvantages:

1. The pointer overhead is relatively greater than in the linked allocation of the file.
2. Indexed allocation suffers from wasted space.
3. For a large file, it is very difficult for a single index block to hold all the pointers.
4. For very small files, say files that span only 2–3 blocks, indexed allocation still dedicates an entire block to the
pointers, which is inefficient in terms of memory utilization.

Suppose that the head of a moving hard disk with 200 tracks, numbered 0 to 199, is currently serving a request at
track 143 and has just finished a request at 125. The queue of request is kept in the FIFO order 86, 147, 91, 177,
94, 150, 102, 175 and 130.
What is the total number of head movements needed to satisfy these requests for the following disk-scheduling
algorithms?
1. FCFS Scheduling
2. SSTF Scheduling
2 3. SCAN Scheduling

FCFS: 143 → 86 → 147 → 91 → 177 → 94 → 150 → 102 → 175 → 130 = 57 + 61 + 56 + 86 + 83 + 56 + 48 + 73 + 45 = 565
cylinders.
SSTF: 143 → 147 → 150 → 130 → 102 → 94 → 91 → 86 → 175 → 177 = 4 + 3 + 20 + 28 + 8 + 3 + 5 + 89 + 2 = 162 cylinders.
SCAN: since the head has just moved from 125 to 143, it continues toward higher tracks, servicing 147, 150, 175 and 177,
reaches track 199, then reverses and services 130, 102, 94, 91 and 86: (199 − 143) + (199 − 86) = 56 + 113 = 169 cylinders.

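The head movements for the question above can be computed with a short Python sketch (assuming, as the 125 → 143 history implies, that the head continues toward higher-numbered tracks; helper names are my own):

```python
REQUESTS = [86, 147, 91, 177, 94, 150, 102, 175, 130]
START = 143

def fcfs(start, queue):
    total, pos = 0, start
    for track in queue:                    # serve strictly in arrival order
        total += abs(track - pos)
        pos = track
    return total

def sstf(start, queue):
    pending, pos, total = list(queue), start, 0
    while pending:
        track = min(pending, key=lambda t: abs(t - pos))   # nearest request next
        total += abs(track - pos)
        pos = track
        pending.remove(track)
    return total

def scan_up(start, queue, max_track=199):
    # head sweeps up to the disk edge, then reverses for the remaining requests
    lower = [t for t in queue if t < start]
    distance = max_track - start
    if lower:
        distance += max_track - min(lower)
    return distance

print(fcfs(START, REQUESTS), sstf(START, REQUESTS), scan_up(START, REQUESTS))
```

This gives 565 cylinders for FCFS, 162 for SSTF and 169 for SCAN.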
Discuss the linked allocation and indexed allocation schemes for file allocation. Compare the indexed allocation scheme with the contiguous allocation scheme.
Linked allocation

The problems of contiguous allocation are solved in the linked allocation method. In this scheme, the disk blocks of a file are arranged as a linked list and need not be contiguous; the blocks may be scattered over the disk. The directory entry contains a pointer to the first block and a pointer to the last block; these pointers are not visible to the user. For example, a file of six blocks might start at block 10, with each block holding a pointer to the next block in the file. When we create a new file, we simply add a new directory entry; the entry holds the pointer to the first disk block of the file, and when that pointer is nil, the file is empty.
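The per-block pointer chain can be sketched in Python as a toy model (not a real file system: `LinkedDisk`, its block map, and the `NIL` sentinel are all assumptions made for illustration):

```python
NIL = -1  # nil pointer: marks the end of a file's chain (or an empty file)

class LinkedDisk:
    """Toy model of linked allocation: each block stores (data, next_block)."""
    def __init__(self):
        self.blocks = {}     # block number -> (data, pointer to next block)
        self.directory = {}  # filename -> (first block, last block)

    def create(self, name):
        # A new file has no blocks yet; nil pointers mark the empty file.
        self.directory[name] = (NIL, NIL)

    def append(self, name, block_no, data):
        first, last = self.directory[name]
        self.blocks[block_no] = (data, NIL)
        if first == NIL:
            self.directory[name] = (block_no, block_no)
        else:
            d, _ = self.blocks[last]
            self.blocks[last] = (d, block_no)  # link old tail to new block
            self.directory[name] = (first, block_no)

    def read(self, name):
        # Sequential access only: follow the chain from the first block.
        out, block = [], self.directory[name][0]
        while block != NIL:
            data, block = self.blocks[block]
            out.append(data)
        return out
```

Note how `read` can only walk the chain from the front, which is why linked allocation supports sequential but not direct access.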

Indexed Allocation
In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file has its own index block, which is an array of disk-block addresses: the ith entry of the index block points to the ith block of the file. The address of the index block is maintained by the directory. When we create a file, all pointers are set to nil; a block is obtained from the free-space manager when the ith block is first written. When the index block is small, it cannot hold all the pointers for a large file. To deal with this issue, the following mechanisms are available:

• Linked scheme
• Multilevel scheme
• Combined scheme
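The core of the scheme, one index block per file whose ith slot addresses the ith data block, can be sketched as follows (`IndexedFile` and its fixed `index_size` are illustrative assumptions):

```python
NIL = -1  # nil pointer: this index slot addresses no disk block yet

class IndexedFile:
    """Toy model of indexed allocation: the directory would store a pointer
    to this index block; the block itself is an array of disk addresses."""
    def __init__(self, index_size=8):
        # On file creation, all pointers in the index block are set to nil.
        self.index = [NIL] * index_size

    def write_block(self, i, disk_block):
        # The ith index entry points to the ith block of the file.
        self.index[i] = disk_block

    def lookup(self, i):
        # Direct (random) access: one index lookup reaches any block.
        return self.index[i]
```

Contrast this with linked allocation, where reaching block i requires following i pointers from the start of the chain.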
Answer the following
1. Disk Structure
2. RAID Structure.

1. The actual physical details of a modern hard disk may be quite complicated. Simply, there are one or more surfaces, each
of which contains several tracks, each of which is divided into sectors. There is one read/write head for every surface of the
disk.

Also, the same track on all surfaces is known as a 'cylinder'. When talking about movement of the read/write head, the cylinder is a useful concept, because all the heads (one for each surface) move in and out of the disk together.

2. RAID, or "Redundant Arrays of Independent Disks", is a technique which makes use of a combination of multiple disks, instead of a single disk, for increased performance, data redundancy, or both. The term was coined by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987.

Key evaluation points for a RAID System


• Reliability: How many disk faults can the system tolerate?
• Availability: What fraction of the total session time is a system in uptime mode, i.e. how available is the system for
actual use?
• Performance: How good is the response time? How high is the throughput (rate of processing work)? Note that performance involves many parameters, not just these two.
• Capacity: Given a set of N disks each with B blocks, how much useful capacity is available to the user?
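As a worked example of the capacity question, the standard textbook formulas for the useful capacity of N disks of B blocks each can be written out directly (the helper name `useful_capacity` is an assumption; RAID levels 2 and 3 are omitted for brevity):

```python
def useful_capacity(level, n, b):
    """Useful user capacity, in blocks, of n disks of b blocks each
    under common RAID levels (standard textbook formulas)."""
    if level == 0:          # block striping, no redundancy
        return n * b
    if level == 1:          # mirroring: half the space holds copies
        return (n * b) // 2
    if level in (4, 5):     # one disk's worth of parity
        return (n - 1) * b
    if level == 6:          # two disks' worth of parity (P + Q)
        return (n - 2) * b
    raise ValueError("level not covered in this sketch")

# e.g. four 100-block disks: RAID 0 -> 400, RAID 1 -> 200, RAID 5 -> 300
```

The capacity lost to redundancy is exactly what buys the fault tolerance: RAID 0 tolerates no disk failure, RAID 1/4/5 tolerate one, and RAID 6 tolerates two.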

Describe Caching, Spooling and Buffering services of kernel I/O sub-system.


• Buffering − The kernel I/O subsystem maintains a memory area, known as a buffer, that stores data while they are transferred between two devices or between a device and an application. Buffering is done to cope with a speed mismatch between the producer and the consumer of a data stream, or to adapt between devices that have different data-transfer sizes.
• Caching − The kernel maintains a cache, which is a region of fast memory that holds copies of data. Access to the cached copy is more efficient than access to the original.
• Spooling and Device Reservation − A spool is a buffer that holds output for a device, such as a printer, that cannot accept interleaved data streams. The spooling system copies the queued spool files to the printer one at a time. In some operating systems, spooling is managed by a system daemon process; in others, it is handled by an in-kernel thread.
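The buffering idea, a queue that decouples a fast producer from a slow consumer, can be sketched as a toy bounded buffer (`BoundedBuffer` is an illustrative name; a real kernel would block or sleep the producer rather than return a flag):

```python
from collections import deque

class BoundedBuffer:
    """Toy bounded buffer absorbing a producer/consumer speed mismatch."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def put(self, item):
        # A real kernel would put the producer to sleep on a full buffer.
        if len(self.queue) >= self.capacity:
            return False   # buffer full: producer must wait
        self.queue.append(item)
        return True

    def get(self):
        # Consumer drains items in FIFO order; None signals an empty buffer.
        return self.queue.popleft() if self.queue else None
```

A spool is essentially such a buffer applied per output job: the printer daemon plays the consumer, draining one queued file at a time.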

Explain contiguous, indexed and linked allocation of disk space.


Contiguous Allocation: – Contiguous allocation is one of the most widely used methods of allocation. Contiguous allocation means we allocate the blocks in such a manner that, on the hard disk, each file gets a contiguous run of physical blocks. We can see in the figure below that the directory holds three files; the table records the starting block and the length of each file, and each file is allocated a contiguous set of blocks.
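Because each directory entry is just a (start, length) pair, block i of a file is found by simple arithmetic, which is why contiguous allocation gives fast direct access. A toy sketch (`ContiguousDisk` and the example file entries are illustrative assumptions, not a real API):

```python
class ContiguousDisk:
    """Toy model of contiguous allocation: each directory entry records
    the starting block and the length of the file."""
    def __init__(self):
        self.directory = {}  # filename -> (start block, length)

    def create(self, name, start, length):
        self.directory[name] = (start, length)

    def block_of(self, name, i):
        # Direct access: block i of the file lives at start + i.
        start, length = self.directory[name]
        if not 0 <= i < length:
            raise IndexError("block out of range")
        return start + i
```

The weakness is the flip side of the same design: growing a file requires free blocks immediately after its current run, and the scheme suffers from external fragmentation.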
Linked allocation
The problems of contiguous allocation are solved in the linked allocation method. In this scheme, the disk blocks of a file are arranged as a linked list and need not be contiguous; the blocks may be scattered over the disk. The directory entry contains a pointer to the first block and a pointer to the last block; these pointers are not visible to the user. For example, a file of six blocks might start at block 10, with each block holding a pointer to the next block in the file. When we create a new file, we simply add a new directory entry; the entry holds the pointer to the first disk block of the file, and when that pointer is nil, the file is empty.

Indexed Allocation

In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file has its own index block, which is an array of disk-block addresses: the ith entry of the index block points to the ith block of the file. The address of the index block is maintained by the directory. When we create a file, all pointers are set to nil; a block is obtained from the free-space manager when the ith block is first written. When the index block is small, it cannot hold all the pointers for a large file. To deal with this issue, the following mechanisms are available:

• Linked scheme
• Multilevel scheme
• Combined scheme
