BSc_3rdSem_Course8_OS

The document provides a comprehensive overview of operating systems, including their definition, history, functions, and types. It covers various units such as process management, memory management, and file management, detailing concepts like deadlocks, IPC, and scheduling algorithms. Additionally, it discusses resource abstraction and its benefits in simplifying hardware interaction for application programmers.

Uploaded by sudheerkuc.songs

Operating Systems

Unit 1 : Introduction, History & Evolution, Operating system functions, Types

01. Define Operating system. Write about history & evolution of Operating Systems.
02. Write about functions of Operating Systems.
03. What are the different types of Operating Systems?
04. Explain about Resource Abstraction in OS.

Unit 2 : Threads, Process Scheduling & Scheduling Algorithms

01. Explain system calls & system programs.
02. Write about user view & system view of the process & resources.
03. Write differences between user mode & kernel mode.
04. Write about process abstraction & process hierarchy.
05. What are the different types of threads? Explain benefits of threads.
06. Explain threading issues & thread libraries.
07. Define process scheduling. Explain process scheduling algorithms (preemptive & non-preemptive).

Unit 3 : Process Management, Deadlocks, Methods for IPC, Process Synchronization problems

01. What is deadlock? Write about deadlock characterization. [or] Explain necessary & sufficient conditions for deadlocks.
02. Write about deadlock handling approaches. [or] Explain: Deadlock Prevention, Deadlock Avoidance.
03. Explain: Concurrent Processes or Concurrency.
04. Write about critical section & Semaphores.
05. Write about Inter Process Communication (IPC).
06. What is process synchronization? Write about classical problems of process synchronization.

Unit 4 : Memory Management, Paging, Segmentation, Virtual Memory

01. Explain Physical & Virtual Address Space.
02. Write about Memory allocation strategies (fixed & variable partitions).
03. Write about Paging & Segmentation.
04. Write about Virtual Memory.
05. Explain Page Replacement Algorithms.

Operating Systems – Sudheer Kumar Kasulanati



Unit 5 : File & I/O Management, Disk Scheduling Algorithms

01. Write short note on Directory Structure.
02. Explain File Allocation Methods.
03. Write about Device management, Pipes, Buffer & Shared Memory.
04. Explain: Disk Scheduling Algorithms.

Model Paper 1
Model Paper 2



Unit 1 : Introduction, History & Evolution, Operating system functions, Types

01. Define Operating system. Write about history & evolution of Operating Systems.

Operating System:
- The operating system is a system program that serves as an interface between the computing system and the end-user.
- Operating systems create an environment where the user can run programs and communicate with software or applications in a comfortable and well-organized way.
- An operating system is a software program that manages and controls the execution of application programs, software resources and computer hardware.
- It also manages software/hardware resources, covering file management, memory management, input/output and peripheral devices such as disk drives, printers, etc.
- Some of the popular operating systems are: Linux OS, Windows OS, Mac OS etc.

Generations of Operating Systems:


The First Generation (1940 to early 1950s):

- When the first electronic computers were developed in the 1940s, they were created without any operating system.
- In early times, users had full access to the computer machine and wrote a program for each task directly in machine language.
- Programmers could perform and solve only simple mathematical calculations during this generation, and such calculations did not require an operating system.



The Second Generation (1955 - 1965):

- The first operating system (OS) was created in the early 1950s and was known as GMOS.
- General Motors developed this OS for an IBM computer.
- The second-generation operating system was based on a single-stream batch processing system: it collected similar jobs in groups or batches and then submitted them to the operating system using punch cards, completing all jobs in the machine.
- At each job completion (whether normal or abnormal), control is transferred back to the operating system, which cleans up after the finished job and then reads and initiates the next job from the punch cards.
- The new machines of this era were called mainframes; they were very big and were used by professional operators.

The Third Generation (1965 - 1980):

- During the late 1960s, designers developed operating systems that could hold multiple tasks in memory at once, a technique called multiprogramming.
- The introduction of multiprogramming played a very important role in the development of operating systems, as it allows the CPU to be kept busy at all times by switching among different tasks on the computer.
- The third generation also saw the phenomenal growth of minicomputers, starting in 1961 with the DEC PDP-1. These PDPs led to the creation of personal computers in the fourth generation.

The Fourth Generation (1980 - Present):

- The fourth generation of operating systems is tied to the development of the personal computer. The personal computer was, however, very similar to the minicomputers developed in the third generation.
- The cost of a personal computer was very high at that time. A major factor in the spread of personal computers was the birth of Microsoft and the Windows operating system.
- In 1981, Microsoft introduced MS-DOS (Microsoft Disk Operating System); however, its commands were difficult for ordinary users to understand.
- After that, Microsoft released various Windows operating systems such as Windows 95, Windows 98, Windows XP, Windows 7 etc.
- Currently, most Windows users use the Windows 10 operating system.
- Besides Windows, another popular operating system built in the 1980s was Apple's, developed under Steve Jobs, a co-founder of Apple.
- They named the operating system Macintosh OS or Mac OS.

Advantages of Operating System


- It helps to monitor and regulate resources.
- It is easy to operate, since it offers a basic graphical user interface for communicating with the device.
- It creates interaction between the users and the computer applications or hardware.
- It manages the CPU, on which the performance of the computer system depends.
- The response time and throughput of processes and programs are improved.
- It allows different resources, like fax machines and printers, to be shared.
- It also offers a platform for various types of applications, such as system and web applications.
Disadvantages of the Operating System
- Only a limited number of tasks can run at the same time.
- If any error occurs in the operating system, the stored data can be destroyed.
- It is very difficult for the OS to provide complete security against viruses, because a threat or virus can appear in the system at any time.
- An unknown user can easily use a system without the permission of the original user.
- The cost of an operating system is very high.

02. Write about functions of Operating Systems.

Operating System:

- Operating system acts as an interface between the user & h/w components of the computer.
- Operating system is the first program to be loaded into the computer during booting & remains in the
memory all the time.
- The basic functions of operating systems are listed below.

Functions of Operating Systems:

- Performs basic computer tasks such as managing the keyboard, mouse, printer etc.
- When a new device is connected to the computer, it is automatically detected.
- The operating system manages the computer's resources like CPU, memory & I/O devices.
- The operating system provides a user interface to interact easily with the computer. The two kinds are CLI (Command Line Interface) & GUI (Graphical User Interface).
- The operating system provides an interface for the user to develop application programs & makes sure that these applications run on other computers with the same or different h/w.
- The operating system enables the user to execute more than one process at a time.
- The operating system is responsible for allocating memory to different processes.
- The operating system enables the user to create, copy, delete, move and rename a file.
- The operating system provides security for the data in the computer.
- The operating system provides networking to share data between multiple systems.
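As an illustration of the file-management function above, the short Python sketch below exercises the OS's file services (create, rename, delete) through the standard os module; the directory and filenames are arbitrary examples chosen for this sketch.

```python
import os
import tempfile

# The OS exposes file-management services through system-call wrappers;
# Python's os module is a thin layer over them. Names are illustrative.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "notes.txt")

# create & write a file (ultimately issues open/write system calls)
with open(path, "w") as f:
    f.write("hello")

# rename the file via the OS
new_path = os.path.join(workdir, "notes_renamed.txt")
os.rename(path, new_path)

# delete the file via the OS
os.remove(new_path)
```

Each of these operations is carried out by the operating system on behalf of the program, which never touches the disk hardware directly.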



03. What are the different types of Operating Systems?
Types of Operating System

1. Batch Operating System


2. Time-Sharing Operating System
3. Embedded Operating System
4. Multiprogramming Operating System
5. Network Operating System
6. Distributed Operating System
7. Multiprocessing Operating System
8. Real-Time Operating System

Batch Operating System


 In a Batch Operating System, there is no direct interaction between the user and the computer.
 The user therefore prepares jobs offline and saves them to punch cards, paper tape or magnetic tape.
 After creating the jobs, the user hands them over to the computer operator, who sorts them into batches of similar jobs, like B2, B3, and B4.
 The computer operator then submits the batches for execution one by one. The CPU executes the jobs, and when all jobs are finished, the computer operator returns the output to the user.

Time-Sharing Operating System


 This type of operating system allows many users located at different places to share and use a single system at the same time.
 The time-sharing operating system is a logical extension of multiprogramming through which users can run multiple tasks concurrently.
 Furthermore, it provides each user with a terminal for input and output to the program or process currently running on the system.
 The CPU's time is shared between many user processes; this sharing of processor time among multiple users simultaneously is termed time-sharing.



Embedded Operating System
 An embedded operating system is a special-purpose operating system used in the embedded hardware configuration of a computer system.
 These operating systems are designed to work on dedicated devices like automated teller machines (ATMs), airplane systems, digital home assistants, and Internet of Things (IoT) devices.

Multiprogramming Operating System


 When a single program waits for an I/O resource, the CPU remains idle; this underutilization is an improper use of system resources.
 Hence, operating systems introduced a new concept, known as multiprogramming.
 A multiprogramming operating system keeps two or more processes or programs active simultaneously, and the same computer system executes them one after another.
 While one program is running and using the CPU, another program can use the I/O resources at the same time or wait for other system resources to become available.
 Because it improves the use of system resources, such a system is known as a multiprogramming operating system.



Network Operating System
 A network operating system is an important category of operating system that runs on a server and uses network devices like switches, routers, or firewalls to handle data, applications and other network resources.
 An operating system that provides connectivity among autonomous computers is called a network operating system.
 The network operating system also makes it possible to share data, files, hardware devices and printer resources among multiple computers that communicate with each other.

Types of network operating system:


Peer-to-peer network operating system: This type of network operating system allows
users to share files, resources between two or more computer machines using a LAN.

Client-Server network operating system: This type of network operating system allows users to access resources, functions, and applications through a common server or central hub of resources. The client workstation can access all resources that exist in the central hub of the network, and multiple clients can access and share different types of resources over the network from different locations.


Distributed Operating system


 A distributed operating system provides an environment in which multiple independent CPUs or processors communicate with each other across physically separate computational nodes.
 Each node contains specific software that communicates with the global aggregate operating system.
 The programmer or developer can access any node and its resources to execute computational tasks and achieve a common goal.
 It is an extension of the network operating system and facilitates a high degree of connectivity for communicating with other users over the network.



Multiprocessing Operating System
 This type of operating system uses two or more central processing units (CPUs) in a single computer system.
 These multiprocessor or parallel operating systems are used to increase the computer system's efficiency.
 In a multiprocessor system, the CPUs share the computer bus, clock, memory and input/output devices for the concurrent execution of processes and for resource management.

Real-Time Operating System


 A real-time operating system is an important type of operating system that provides services and data-processing resources for applications in which the time interval required to process and respond to input/output must be very small, with no unpredictable delay.
 For example, real-life situations such as governing an automatic car, a traffic signal, a nuclear reactor or an aircraft require an immediate response to complete tasks within a specified time limit.
 Hence, a real-time operating system must be fast and responsive for embedded systems, weapon systems, robots, scientific research & experiments and various other real-time applications.

Types of the real-time operating systems:

Hard Real-Time System


 These operating systems are used where critical tasks must be completed within a defined time limit.
 If the response time exceeds that limit, the result is not accepted by the system, and serious issues such as system failure may occur.
 In a hard real-time system, secondary storage is either limited or missing, so these systems store data in ROM.

Soft Real-Time System


 A soft real-time system is a less restrictive system that can tolerate delays in software and hardware resources.
 In a soft real-time system, a critical task takes priority over less important tasks and retains that priority until it completes.
 A time limit is set for each specific job, and short delays in subsequent tasks are acceptable.
 Examples include computer audio and video, virtual reality, reservation systems etc.



04. Explain about Resource Abstraction in OS.
Resource Abstraction:

- Resource abstraction is the process of "hiding the details of how the hardware
operates, thereby making computer hardware relatively easy for an application
programmer to use".
- One way in which the operating system might implement resource abstraction is to
provide a single abstract disk interface which will be the same for both the hard disk
and floppy disk.
- Such an abstraction saves the programmer from needing to learn the details of both
hardware interfaces. Instead, the programmer only needs to learn the disk
abstraction provided by the operating system.
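The single abstract disk interface described above can be sketched in Python as follows; the class and method names (Disk, read_block, write_block) are illustrative inventions for this example, not a real OS API.

```python
# Resource abstraction sketch: one abstract "disk" interface hides the
# details of two hypothetical devices behind the same set of methods.
class Disk:
    def read_block(self, n):
        raise NotImplementedError

    def write_block(self, n, data):
        raise NotImplementedError

class HardDisk(Disk):
    def __init__(self):
        self.blocks = {}

    def read_block(self, n):
        return self.blocks.get(n, b"\x00")

    def write_block(self, n, data):
        self.blocks[n] = data      # details of seek/rotate are hidden

class FloppyDisk(Disk):
    def __init__(self):
        self.blocks = {}

    def read_block(self, n):
        return self.blocks.get(n, b"\x00")

    def write_block(self, n, data):
        self.blocks[n] = data      # slower device, but the same interface

def save(disk: Disk, n, data):
    # Application code is written against the abstraction only; it never
    # needs to know which physical device it is talking to.
    disk.write_block(n, data)

hd, fd = HardDisk(), FloppyDisk()
save(hd, 0, b"on hard disk")
save(fd, 0, b"on floppy")
```

The programmer learns one interface (Disk) instead of two hardware interfaces, which is exactly the saving the paragraph above describes.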

Resources in operating system:

- Resources include the Central Processing Unit (CPU), Memory, File storage,
Input/Output (I/O) devices, and Network connections.

- An operating system abstraction layer (OSAL) provides an application programming interface (API) to an abstract operating system, making it easier and quicker to develop code for multiple software or hardware platforms.

- In process abstraction, details of the threads of execution are not visible to the consumer of the process.

- An example of process abstraction is the concurrency scheduler in a database system. A database system can handle many concurrent queries.

Benefits of resource abstraction:

- While making the hardware easier to use, resource abstraction also limits the level of control over the hardware, since some functionality is hidden behind the abstraction.

- Since most application programmers do not need such a degree of control, the abstraction provided by the operating system is generally very useful.


Unit 2 : Threads, Process Scheduling & Scheduling Algorithms

01. Explain system calls & system programs.

System Call:

 In computing, a system call is the programmatic way in which a computer program requests a service from the kernel of the operating system it is executed on.
 A system call is a way for programs to interact with the operating system.
 A computer program makes a system call when it requests a service from the operating system's kernel.
 System calls provide the services of the operating system to user programs via the Application Program Interface (API).
 They provide an interface between a process and the operating system, allowing user-level processes to request services of the operating system.
 System calls are the only entry points into the kernel. All programs needing resources must use system calls.

Following are the services provided by System Calls :

 Process creation and management


 Main memory management
 File Access, Directory and File system management
 Device handling(I/O)
 Protection
 Networking, etc.

Types of System Calls : There are 5 different categories of system calls :

 Process control : end, abort, create, terminate, allocate and free memory.
 File management : create, open, close, delete, read file etc.
 Device management
 Information maintenance
 Communication

Examples of Windows & Unix System Calls :

                     Windows                  Unix
Process control      CreateProcess()          fork()
                     ExitProcess()            exit()
                     WaitForSingleObject()    wait()
File manipulation    CreateFile()             open()
                     ReadFile()               read()
                     WriteFile()              write()
                     CloseHandle()            close()
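A minimal sketch of the Unix side of the table above, assuming a POSIX-like system: Python's os module exposes thin wrappers over the open/write/read/close system calls, and over getpid for information maintenance. The file name is an arbitrary example.

```python
import os
import tempfile

# Information maintenance: getpid(2) returns this process's ID.
pid = os.getpid()

# File management via low-level system-call wrappers (not buffered
# Python file objects). The path is an illustrative name only.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC)  # open(2)
os.write(fd, b"written via write(2)")                      # write(2)
os.close(fd)                                               # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                                    # read(2)
os.close(fd)
os.remove(path)                                            # unlink(2)
```

Each os call here crosses from user mode into the kernel and back, which is exactly the entry-point role of system calls described above.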



System Programs in Operating System:
 System programming can be defined as the act of building system software using system programming languages.
 In the computer hierarchy, hardware is at the lowest level; above it come the operating system, system programs, and finally application programs.
 System programs make program development and execution convenient.
 System programs traditionally lie between the user interface and system calls.

System Programs can be divided into these categories :


File Management –
File management is the process of manipulating files in the computer system; it includes creating, modifying and deleting files.

Status Information –
Some users ask for simple status information such as the date, time, amount of available memory, or disk space. Other programs provide detailed performance, logging, and debugging information, which is more complex.

File Modification –
These programs are used to modify the contents of files.

Programming-Language support –
Compilers, assemblers, debuggers, and interpreters are provided to users.

Program Loading and Execution –
When a program is ready after assembling and compilation, it must be loaded into memory for execution. A loader is the part of an operating system that is responsible for loading programs and libraries; it is one of the essential stages in starting a program.

Communications –
System programs provide virtual connections among processes, users, and computer systems. Users can send messages to another user on their screen.



02. Write about user view & system view of the process & resources.
User View & System View :
 An operating system is a construct that allows user application programs to interact with the system hardware.
 The operating system by itself does not perform any useful function; rather, it provides an environment in which other applications and programs can do useful work.
 The operating system can be observed from the point of view of the user or of the system; these are known as the user view and the system view respectively.
 The user viewpoint focuses on how the user interacts with the operating system through various application programs.
 In contrast, the system viewpoint focuses on how the hardware interacts with the operating system to complete various tasks.

User View :
 The user view depends on the system interface that is used by the users. The different types of user-view experiences can be explained as follows :
 If the user is using a personal computer, the operating system is largely designed to make the interaction easy. Some attention is also paid to the performance of the system, but there is little need for the operating system to worry about resource utilization, because the personal computer's resources are available to a single user.
 If the user is using a system connected to a mainframe or a minicomputer, the operating system makes sure that all the resources such as CPU, memory and I/O devices are divided uniformly between the systems in the network.
 If the user is sitting at a workstation connected to other workstations through networks, then the operating system needs to focus on both individual use of resources and sharing through the network.
 If the user is using a handheld computer such as a mobile phone, then the operating system is designed around the usability of the device, including a few remote operations. The battery level of the device is also taken into account.
 Some devices involve very little or no user view, because there is no interaction with users. Examples are embedded computers in home devices, automobiles etc.



System View
 From the computer system's point of view, the operating system is the bridge between applications and hardware. It is the program most intimate with the hardware and is used to control it as required.
 The different types of system view for an operating system can be explained as follows:
 The operating system works as a control program. It manages all the processes and I/O devices so that the computer system works smoothly and without errors, and it makes sure that the I/O devices work properly without creating problems.
 Operating systems can also be viewed as a way to make using hardware easier.
 Computers are needed to solve user problems easily, but it is not easy to work directly with the computer hardware. So operating systems were developed to make communicating with the hardware easy.
 An operating system can also be considered as a program running at all times in the background of a computer system (known as the kernel), handling all the application programs. This is the definition of the operating system that is generally followed.
 The hardware and the operating system interact for a variety of reasons, including:
 The hardware and the operating system interact for a variety of reasons, including:

1. Resource Allocation

 The hardware contains several resources like registers, RAM, ROM, CPUs, I/O interaction
etc. These are all resources that the operating system needs when an application program
demands them.
 Only the operating system can allocate resources, and it uses several tactics and strategies to make the most of its processing and memory space.
 The operating system uses a variety of strategies to get the most out of the hardware resources, including paging, virtual memory, caching, and so on.
 Efficient allocation matters for the user viewpoint as well: inefficient resource allocation may cause the user's system to lag or hang, reducing the user experience.

2. Control Program

 The control program controls how input and output devices (hardware) interact with the
operating system.
 The user may request an action that can only be done with I/O devices; in this case, the operating system must properly communicate with, control, detect, and handle such devices.



03. Write differences between user mode & kernel mode.
User Mode:
 When a program is started, the operating system launches it in user mode. When a user-mode program requests to run, the OS creates a process and a virtual address space (the address space for that process) for it.
 User-mode programs are less privileged: user-mode applications are not allowed to access system resources directly.
 For instance, if an application in user mode wants to access system resources, it must first go through the operating system kernel by using syscalls.

Kernel Mode:
 The kernel is the core program on which all the other operating system components rely.
 It is used to access the hardware components, to schedule which processes should run on the computer system and when, and to manage the interaction between application software and hardware.
 Hence it is the most privileged program; unlike other programs, it can interact directly with the hardware.
 When a program running in user mode needs hardware access, for example to a webcam, it first has to go through the kernel by using a syscall; to carry out the request, the CPU switches from user mode to kernel mode at the time of execution.
 After the request has been serviced, the CPU switches back to user mode.

Differences Between Kernel Mode and User Mode:

- Access : In kernel mode, the program has direct and unrestricted access to system resources. In user mode, the application program executes without direct access to system resources.
- Interruptions : In kernel mode, the whole operating system might go down if an interrupt occurs. In user mode, only the single process fails if an interrupt occurs.
- Other names : Kernel mode is also known as the master mode, privileged mode, or system mode. User mode is also known as the unprivileged mode, restricted mode, or slave mode.
- Virtual address space : In kernel mode, all processes share a single virtual address space. In user mode, each process gets its own separate virtual address space.
- Level of privilege : In kernel mode, the applications have more privileges as compared to user mode, where the applications have fewer privileges.
- Restrictions : Kernel mode can access both user programs and kernel programs, so there are no restrictions. User mode cannot directly access kernel programs.
- Mode bit value : The mode bit of kernel mode is 0; the mode bit of user mode is 1.

04. Write about process abstraction & process hierarchy.

Process Abstraction:
 Processes are the most fundamental operating system abstraction.
 Processes organize information about other abstractions and represent a single thing that
the computer is “doing.”
 We know processes as applications or programs which are under execution.
 Abstraction means displaying only essential information and hiding the details.
 Process abstraction refers to providing only essential information about the data to the
outside world, hiding the background details or implementation.
 Unlike threads, address spaces and files, processes are not tied to a hardware component.
Instead, they contain other abstractions.
 Processes contain:
 - one or more threads,
 - an address space, and
 - zero or more open file handles representing files.

Process Hierarchy:
 Nowadays all operating systems permit a user to create and destroy processes.
 A process can create several new processes during its execution.
 The creating process is called the Parent Process and the new process is called the Child Process.
 There are different options when creating a new process. These are as follows −
 Execution − The parent process executes concurrently with the child, or it waits till all its children get terminated.
 Sharing − The parent and child processes may share all resources such as memory or files, the child may share a subset of the parent's resources, or parent and child may share no resource in common.



 The reasons that a parent process may terminate the execution of one of its children are as follows :
 The child process has exceeded its allocated resource usage. (For this, there must be some mechanism that allows the parent process to inspect the state of its children.)
 The task that was assigned to the child process is no longer required.
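The parent/child relationship described above can be sketched with the Unix fork()/wait() calls, assuming a POSIX system (this sketch will not run on Windows); the chosen exit status 7 is an arbitrary example.

```python
import os

# fork(2) duplicates the calling process: the child sees pid == 0,
# the parent sees the child's process ID.
pid = os.fork()

if pid == 0:
    # Child process: do its work, then terminate with status 7.
    os._exit(7)
else:
    # Parent process: block until the child terminates (the
    # "Execution" option above where the parent waits).
    _, status = os.waitpid(pid, 0)
    child_status = os.WEXITSTATUS(status)   # recover the child's exit code
```

Waiting on the child also lets the parent inspect how the child terminated, which supports the termination reasons listed above.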

05. What are the different types of threads? Explain benefits of threads.

 Threads in Operating System:

 A thread is a single sequential flow of execution of tasks within a process, so it is also known as a thread of execution or thread of control.
 Threads execute inside a process in any operating system.
 A process can contain more than one thread.
 Each thread of the same process makes use of a separate program counter, a stack of activation records, and control blocks.
 A thread is often referred to as a lightweight process.
 A process can be split into many threads.
 For example, in a browser, many tabs can be viewed as threads. MS Word uses many threads - formatting text in one thread, processing input in another thread, etc.
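The browser-tab example above can be sketched with Python's threading module; the thread names and the shared results list are illustrative assumptions for this sketch.

```python
import threading

# Several threads inside one process: each runs its own flow of
# control but shares the process's address space (the `results` list).
results = []
lock = threading.Lock()

def worker(name):
    # Shared data, so access is synchronized with a lock.
    with lock:
        results.append(name)

# Three threads, named after hypothetical browser tabs.
threads = [threading.Thread(target=worker, args=(f"tab-{i}",))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()        # wait for all threads of the process to finish
```

All three threads append to the same list because they share one address space, unlike separate processes, which would each get their own copy.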

Types of Threads:
In the operating system, there are two types of threads.

1. User-level thread.
2. Kernel-level thread.



User-level thread

 The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user.
 If a user-level thread performs a blocking operation, the whole process is blocked.
 The kernel knows nothing about user-level threads.
 The kernel manages processes containing user-level threads as if they were single-threaded processes.
 Examples: Java threads, POSIX (Portable Operating System Interface) threads, etc.

Advantages of User-level threads


1. User threads are more easily implemented than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The user-level thread representation is very simple. The registers, PC, stack, and mini thread control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.

Disadvantages of User-level threads


1. User-level threads lack coordination between the threads and the kernel.
2. If one thread causes a page fault, the entire process is blocked.

Kernel level thread

 Kernel-level threads are recognized by the operating system.


 There is a thread control block for each thread and a process control block for each process in the system.
 Kernel-level threads are implemented by the operating system.
 The kernel knows about all the threads and manages them. The kernel offers system calls to create and manage threads from user space.
 The implementation of kernel threads is more difficult than that of user threads.
 Context switch time is longer for kernel threads.
 If one kernel-level thread performs a blocking operation, another thread can continue execution.
Examples: Windows, Solaris.

Advantages of Kernel-level threads


1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that frequently block.

Disadvantages of Kernel-level threads


1. The kernel has to manage and schedule all threads, which adds overhead.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.

Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space

Benefits of Threads

 Enhanced throughput of the system: when a process is split into many threads, and each thread is treated as a job, the number of jobs completed per unit time increases.
 Effective utilization of multiprocessor systems: with more than one thread in a process, you can schedule threads on more than one processor.
 Faster context switch: the context-switching time between threads is less than between processes; a process context switch means more overhead for the CPU.
 Responsiveness: when a process is split into several threads, the process can respond as soon as one of its threads completes its work.
 Communication: communication between multiple threads is simple because the threads share the same address space, while for processes we must adopt special strategies for communication between two processes.
 Resource sharing: Resources can be shared between all threads within a process.
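The communication and resource-sharing benefits both come from the shared address space. A minimal sketch (Python's threading module is used here purely for illustration; the names counter and worker are ours):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    # All threads update the same global variable: threads of one process
    # share the same address space, so no extra IPC mechanism is needed.
    global counter
    for _ in range(n):
        with lock:          # the lock prevents lost updates
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000
```

Because all four threads update the same counter directly, no message passing is required, but the lock is needed to avoid lost updates when two threads increment at the same time.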

06. Explain threading issues & thread libraries.
Threading issues:
There are several threading issues when we are in a multithreading environment.
Threading Issues in OS
 System Calls
 Thread Cancellation
 Signal Handling
 Thread Pool
 Thread Specific Data

1. The fork() and exec() System Calls

 The fork() and exec() are the system calls.


 The fork() call creates a duplicate process of the process that invokes fork().
 The new duplicate process is called child process and process invoking the fork() is called
the parent process.
 Both the parent process and the child process continue their execution from the
instruction that is just after the fork().
 Let us now discuss the issue with the fork() system call. Consider that a thread of the
multithreaded program has invoked the fork(). So, the fork() would create a new duplicate
process. Here the issue is whether the new duplicate process created by fork() will
duplicate all the threads of the parent process or the duplicate process would be single-
threaded.
 Well, there are two versions of fork() in some of the UNIX systems. Either the fork() can
duplicate all the threads of the parent process in the child process or the fork() would only
duplicate that thread from parent process that has invoked it.
 Which version of fork() must be used totally depends upon the application.
 Next system call is exec() system call when invoked replaces the program along with all its
threads with the program that is specified in the parameter to exec(). Typically the exec()
system call is lined up after the fork() system call.
 Here the issue is if the exec() system call is lined up just after the fork() system call then
duplicating all the threads of parent process in the child process by fork() is useless as the
exec() system call will replace the entire process with the process provided to exec() in the
parameter.
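The fork()-then-exec() pattern described above can be sketched in Python, assuming a POSIX system (os.fork and os.execvp wrap the underlying system calls; the exit code 7 is an arbitrary value chosen for the demonstration):

```python
import os
import sys

pid = os.fork()                      # POSIX only: duplicate the calling process
if pid == 0:
    # Child: exec() replaces the entire process image, threads included --
    # which is why duplicating every thread in fork() first would be wasted
    # work when exec() follows immediately.
    os.execvp(sys.executable, [sys.executable, "-c", "raise SystemExit(7)"])
else:
    _, status = os.waitpid(pid, 0)   # parent waits for the child to finish
    exit_code = os.WEXITSTATUS(status)
```

The parent continues after fork() while the child's original program is entirely replaced by the exec() call.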

2. Thread cancellation
 Termination of a thread in the middle of its execution is termed 'thread cancellation'.
 Consider a multithreaded program that lets multiple threads search through a database for some information. If one of the threads returns with the desired result, the remaining threads are cancelled.
 The thread that we want to cancel is termed the target thread. Thread cancellation can be performed in two ways:

 Asynchronous Cancellation: In asynchronous cancellation, one thread terminates the target thread instantly.

 Deferred Cancellation: In deferred cancellation, the target thread checks itself at regular intervals to decide whether it can terminate itself safely.

 The issues related to the target thread are listed below:
 What if resources have been allocated to the target thread being cancelled?
 What if the target thread is terminated while it was updating data it shares with some other thread?
 Asynchronous cancellation, where a thread immediately cancels the target thread without checking whether it is holding any resources, therefore creates trouble.

 In deferred cancellation, however, a thread indicates the cancellation to the target thread, and the target thread checks its flag to confirm whether it should be cancelled immediately or not. The points at which a thread can be cancelled safely are termed cancellation points by Pthreads (POSIX threads).
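Deferred cancellation can be sketched with a shared flag that the target thread polls at safe points (a Python illustration using threading.Event; the sleep intervals are arbitrary):

```python
import threading
import time

cancel = threading.Event()       # the cancellation flag the target thread checks
work_done = []

def target():
    # Cancellation point: the thread checks the flag at the top of each
    # iteration and terminates itself only when it is safe to do so.
    while not cancel.is_set():
        work_done.append(1)      # one unit of interruptible work
        time.sleep(0.01)

t = threading.Thread(target=target)
t.start()
time.sleep(0.05)
cancel.set()                     # another thread requests the cancellation
t.join()                         # target exits at its next cancellation point
```

The target thread is never killed mid-update: it only stops between units of work, which is exactly what makes deferred cancellation safer than asynchronous cancellation.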

3. Signal Handling
 Signal handling is simpler in a single-threaded program, as the signal is directly forwarded to the process. But in a multithreaded program, the issue arises of which thread of the program the signal should be delivered to.
 How the signal is delivered depends upon the type of signal generated. Generated signals can be classified into two types: synchronous signals and asynchronous signals.

 A synchronous signal is delivered to the specific thread that caused the generation of the signal. For an asynchronous signal, it cannot in general be specified to which thread of the multithreaded program it will be delivered.

4. Thread Pool
 Whenever a user requests a webpage from a server, the server creates a separate thread to service the request.
 This approach, however, has some potential issues.
 If there were no bound on the number of active threads in the system and a new thread were created for every request, the system's resources would eventually be exhausted.

 We are also concerned about the time it takes to create a new thread. It must not be the case that the time required to create a thread exceeds the time the thread spends servicing the request before being discarded, as that would waste CPU time.

 The solution to this issue is the thread pool. The idea is to create a finite number of threads when the process starts; this collection of threads is referred to as the thread pool. The threads stay in the pool and wait until they are assigned a request to service.
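The thread-pool idea can be illustrated with Python's concurrent.futures.ThreadPoolExecutor, which creates a bounded set of worker threads up front and reuses them across requests (handle_request is a stand-in for real request handling):

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # Placeholder for servicing one request (e.g. building a web page).
    return n * n

# A finite pool created when the "server" starts: at most 4 threads exist,
# no matter how many requests arrive, so resources cannot be exhausted.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Eight requests are serviced by only four threads; worker threads return to the pool after each request instead of being destroyed.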

5. Thread Specific data

 Threads belonging to the same process share the data of that process. The issue is what happens if each particular thread of the process needs its own copy of some data. Data associated with a specific thread is referred to as thread-specific data.

 Consider a transaction-processing system in which each transaction is processed in a different thread. To identify each transaction uniquely, we associate a unique identifier with it.

 Since each transaction is serviced in a separate thread, we can use thread-specific data to associate each thread with a specific transaction and its unique id. Thread libraries such as Win32, Pthreads, and Java support thread-specific data.

 So these are threading issues that occur in the multithreaded programming environment.

Thread Libraries
 A thread library provides the programmer an API for creating and managing threads. There
are two primary ways of implementing a thread library. The first approach is to provide a
library entirely in user space with no kernel support. All code and data structures for the
library exist in user space. This means that invoking a function in the library results in a local
function call in user space and not a system call.
 The second approach is to implement a kernel-level library supported directly by the
operating system. In this case, code and data structures for the library exist in kernel space.
Invoking a function in the API for the library typically results in a system call to the kernel.
 Three main thread libraries are in use today:

1. Pthreads (POSIX Threads)
2. Win32
3. Java

 Pthreads, the threads extension of the POSIX standard, may be provided as either a user-level or kernel-level library.
 The Win32 thread library is a kernel-level library available on Windows systems.
 The Java thread API allows thread creation and management directly in Java programs.
However, because in most instances the JVM is running on top of a host operating system,
the Java thread API is typically implemented using a thread library available on the host
system.
 This means that on Windows systems, Java threads are typically implemented using the
Win32 API; UNIX and Linux systems often use Pthreads.

07. Define process scheduling. Explain process scheduling algorithms.
(Preemptive & Non-preemptive).

Operating System Scheduling Algorithms:

 A Process Scheduler schedules different processes to be assigned to the CPU based on


particular scheduling algorithms.
 There are six popular process scheduling algorithms:

 First-Come, First-Served (FCFS) Scheduling


 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
 These algorithms are either non-preemptive or preemptive.
 Non-preemptive algorithms are designed so that once a process enters the running state,
it cannot be preempted until it completes its allotted time.
 The preemptive scheduling is based on priority where a scheduler may preempt a low
priority running process anytime when a high priority process enters into a ready state.

First Come First Serve (FCFS)


 Jobs are executed on first come, first serve basis.
 It is a non-preemptive scheduling algorithm.
 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Given: Table of processes, and their Arrival time, Execution time (the same processes as in the SJN example below)
Process Arrival Time Execution Time Service Time
P0 0 5 0
P1 1 3 5
P2 2 8 8
P3 3 6 16

Wait time of each process is as follows :
Process Wait Time : Service Time – Arrival Time
P0 0–0=0
P1 5–1=4

P2 8–2=6

P3 16 – 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
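The wait times above can be reproduced in a few lines of Python, assuming the same four processes as the SJN example below (arrival times 0, 1, 2, 3 and execution times 5, 3, 8, 6):

```python
# (arrival time, execution time) for P0..P3, already in arrival order
procs = [(0, 5), (1, 3), (2, 8), (3, 6)]

clock, waits = 0, []
for arrival, burst in procs:       # FCFS serves strictly in arrival order
    start = max(clock, arrival)    # service time = when the CPU becomes free
    waits.append(start - arrival)  # wait = service time - arrival time
    clock = start + burst

avg_wait = sum(waits) / len(waits)
print(waits, avg_wait)  # [0, 4, 6, 13] 5.75
```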

Shortest Job Next (SJN)


 This is also known as Shortest Job First, or SJF
 This is a non-preemptive scheduling algorithm.
 This is the best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 Given: Table of processes, and their Arrival time, Execution time
Process Arrival Time Execution Time Service Time
P0 0 5 0
P1 1 3 5

P2 2 8 14

P3 3 6 8

Waiting time of each process is as follows :


Process Wait Time : Service Time – Arrival Time
P0 0–0=0
P1 5–1=4

P2 14 – 2 = 12

P3 8–3=5

Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
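A short Python sketch of non-preemptive SJN that reproduces the table above:

```python
# name: (arrival time, execution time)
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}

clock, waits, done = 0, {}, set()
while len(done) < len(procs):
    # among the processes that have already arrived, pick the shortest burst
    ready = [p for p, (a, b) in procs.items() if a <= clock and p not in done]
    name = min(ready, key=lambda p: procs[p][1])
    arrival, burst = procs[name]
    waits[name] = clock - arrival   # non-preemptive: it then runs to completion
    clock += burst
    done.add(name)

avg_wait = sum(waits.values()) / len(waits)
print(waits, avg_wait)  # P0=0, P1=4, P3=5, P2=12; average 5.25
```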

Priority Based Scheduling

 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first and
so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
 Given: Table of processes, and their Arrival time, Execution time, and Priority. Here we consider 1 to be the lowest priority.
Process Arrival Time Execution Time Priority Service Time
P0 0 5 1 0
P1 1 3 2 11

P2 2 8 1 14

P3 3 6 3 5

Waiting time of each process is as follows :


Process Wait Time : Service Time – Arrival Time
P0 0–0=0
P1 11 – 1 = 10

P2 14 – 2 = 12

P3 5–3=2

Average Wait Time: (0 + 10 + 12 + 2) / 4 = 24 / 4 = 6
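The priority schedule above (1 = lowest priority, ties broken first-come first-served) can be checked with a similar sketch:

```python
# name: (arrival time, execution time, priority); higher number = higher priority
procs = {"P0": (0, 5, 1), "P1": (1, 3, 2), "P2": (2, 8, 1), "P3": (3, 6, 3)}

clock, waits, done = 0, {}, set()
while len(done) < len(procs):
    ready = [p for p, (a, b, pr) in procs.items() if a <= clock and p not in done]
    # highest priority first; among equal priorities, the earliest arrival wins
    name = max(ready, key=lambda p: (procs[p][2], -procs[p][0]))
    arrival, burst, _ = procs[name]
    waits[name] = clock - arrival
    clock += burst
    done.add(name)

avg_wait = sum(waits.values()) / len(waits)
print(waits, avg_wait)  # P0=0, P3=2, P1=10, P2=12; average 6.0
```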

Shortest Remaining Time


 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion but it can be preempted by a
newer ready job with shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not known.
 It is often used in batch environments where short jobs need to be given preference.

Round Robin Scheduling

 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is preempted, and another process executes for its own time period.
 Context switching is used to save states of preempted processes.
In the example below, the quantum is 3.
Process Arrival Time Execution Time
P0 0 5
P1 1 3

P2 2 8

P3 3 6

Wait time of each process is as follows :


Process Wait Time : Service Time – Arrival Time
P0 (0 – 0) + (12 – 3) = 9
P1 (3 – 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12

P3 (9 - 3) + (17 - 12) = 11

Average Wait Time: ( 9 + 2 + 12 + 11) / 4 = 8.5
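The timeline above corresponds to a quantum of 3; a Python sketch that reproduces the wait times:

```python
from collections import deque

# (name, arrival time, execution time) for P0..P3; quantum of 3
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
quantum = 3

clock, i = 0, 0
remaining = {n: b for n, _, b in procs}
finish = {}
queue = deque()
while len(finish) < len(procs):
    while i < len(procs) and procs[i][1] <= clock:   # admit new arrivals
        queue.append(procs[i][0]); i += 1
    if not queue:
        clock += 1
        continue
    name = queue.popleft()
    run = min(quantum, remaining[name])              # run one time slice
    clock += run
    remaining[name] -= run
    while i < len(procs) and procs[i][1] <= clock:   # arrivals during the slice
        queue.append(procs[i][0]); i += 1
    if remaining[name] == 0:
        finish[name] = clock
    else:
        queue.append(name)        # preempted: back to the end of the queue

waits = {n: finish[n] - a - b for n, a, b in procs}
print(waits, sum(waits.values()) / 4)  # waits 9, 2, 12, 11; average 8.5
```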

Multiple-Level Queues Scheduling


 Multiple-level queues are not an independent scheduling algorithm. They make use of other
existing algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
 For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.

Unit 3 : Process Management, Deadlocks,
Methods for IPC, Process Synchronization problems
01. What is deadlock? Write about deadlock characterization. [or]
Explain necessary & sufficient conditions for deadlocks?

Deadlock in Operating System :

 A process in operating system uses resources in the following way.


1) Requests the resource
2) Uses the resource
3) Releases the resource
 Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
 For example, Process 1 is holding Resource 1 and waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for Resource 1.

Deadlock Characterization:
 A deadlock happens in operating system when two or more processes need some resource
to complete their execution that is held by the other process.
 A deadlock occurs if the four Coffman conditions hold true. They are given as follows:
 Mutual Exclusion
 Hold & Wait
 No Preemption
 Circular Wait

Mutual Exclusion
 There should be a resource that can only be held by one process at a time.
 For example, if there is a single instance of Resource 1, it can be held by Process 1 only.

Hold and Wait


 A process can hold multiple resources and still request more resources from other processes
which are holding them.
 For example, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process 1.

No Preemption
 A resource cannot be preempted from a process by force.
 A process can only release a resource voluntarily.
 For example, Process 2 cannot preempt Resource 1 from Process 1.
 It will only be released when Process 1 relinquishes it voluntarily after its execution is
complete.

Circular Wait
 A process is waiting for the resource held by the second process, which is waiting for the
resource held by the third process and so on, till the last process is waiting for a resource
held by the first process.
 This forms a circular chain.
 For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly,
Process 2 is allocated Resource 1 and it is requesting Resource 2. This forms a circular wait
loop.

02. Write about deadlock handling approaches. [or] Explain: Deadlock Prevention, Deadlock
Avoidance

Methods for handling deadlock


There are mainly four methods for handling deadlock.

1. Deadlock ignorance
 It is the most popular method: the system acts as if no deadlock can occur, and when one does, the user simply restarts.
 Handling deadlock is expensive because a lot of code needs to be altered, which decreases performance, so for less critical jobs deadlocks are ignored.
 The Ostrich algorithm is used in deadlock ignorance. It is used in Windows, Linux, etc.
2. Deadlock prevention
 It means that we design such a system where there is no chance of having a deadlock.

3. Deadlock avoidance
 Here, whenever a process enters the system, it must declare its maximum demand. The aim is to handle the deadlock problem before a deadlock actually occurs.
 This approach employs an algorithm to assess the possibility that a deadlock will occur and act accordingly.
 Even if the necessary conditions for deadlock are in place, it is still possible to avoid deadlock by allocating resources carefully.
 A deadlock avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular wait condition can never exist.
 The system can be in one of 3 states: safe, unsafe, or deadlocked.

Methods for deadlock avoidance

1) Resource allocation graph

 This graph is a kind of graphical bankers' algorithm, where a process is denoted by a circle Pi and a resource is denoted by a rectangle Rj.
 The presence of a cycle in the resource allocation graph is a necessary but not sufficient condition for the detection of deadlock. If every resource type has exactly one instance, then the presence of a cycle is a necessary as well as sufficient condition for the detection of deadlock.

 The system is in an unsafe state if, for example, P1 holds R1 and requests R2 while P2 holds R2 and requests R1; then deadlock will occur.

2) Banker’s algorithm

 The resource allocation graph algorithm is not applicable to systems with multiple instances of each resource type. For such systems, Banker's algorithm is used.
 Here whenever a process enters into the system it must declare maximum demand
possible.
 At runtime, we maintain some data structure like current allocation, current need, current
available etc.
 Whenever a process requests some resources we first check whether the system is in a
safe state or not.

Safety algorithm (Banker's Algorithm):

 This algorithm is used to find whether the system is in a safe state or not.

Algorithm:

Step 1: Work = Available; Finish[i] = false for all i
Step 2: Find an i such that a) Finish[i] = false and b) Need[i] <= Work;
if no such i exists, go to Step 4
Step 3: Work = Work + Allocation[i]
Finish[i] = true
goto Step 2
Step 4: If Finish[i] = true for all i, then the system is in a safe state.

Let's understand it by an example:

Consider the following 3 processes, with total resources A=6, B=5, C=7, D=6 :

Process Allocation (A B C D) Need (A B C D)
P0 1 0 3 3 0 2 0 1
P1 1 2 2 1 2 1 0 1
P2 1 2 1 0 0 1 4 0

Available = Total – Allocated = ( 6 5 7 6 ) – ( 3 4 6 4 ) = ( 3 1 1 2 )

Then we check whether the system is in deadlock or not and find the safe sequence of
process.

As per the algorithm Work = Available, i.e Work = ( 3 1 1 2)


For the process P0 : check Need <= Work
( 0 2 0 1 ) <= (3 1 1 2) not satisfied, So P0 should wait.
For the process P1 : check Need <= Work
( 2 1 0 1 ) <= (3 1 1 2) is satisfied, So P1 should be completed.
Now Work = Work + Allocation = ( 3 1 1 2 ) + ( 1 2 2 1 ) = ( 4 3 3 3 )
For the process P0 : check Need <= Work
( 0 2 0 1 ) <= ( 4 3 3 3 ) is satisfied, So P0 should be completed.
Now Work = Work + Allocation = ( 4 3 3 3 ) + ( 1 0 3 3 ) = ( 5 3 6 6 )
For the process P2 : check Need <= Work
( 0 1 4 0 ) <= ( 5 3 6 6 ) is satisfied, So P2 should be completed.
Now Work = Work + Allocation = ( 5 3 6 6 ) + ( 1 2 1 0 ) = ( 6 5 7 6 )

So the system is safe and the safe sequence is P1 → P0 → P2
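The safety check for this example can be written out in Python (the Allocation and Need vectors are those used in the computation above):

```python
# Data from the worked example above (resources A, B, C, D)
available  = [3, 1, 1, 2]   # total (6 5 7 6) minus allocated (3 4 6 4)
allocation = {"P0": [1, 0, 3, 3], "P1": [1, 2, 2, 1], "P2": [1, 2, 1, 0]}
need       = {"P0": [0, 2, 0, 1], "P1": [2, 1, 0, 1], "P2": [0, 1, 4, 0]}

work = available[:]
finish = {p: False for p in need}
sequence = []
# Repeatedly pick any unfinished process whose Need <= Work, then release
# its allocation back into Work (Steps 2 and 3 of the algorithm).
progressed = True
while progressed:
    progressed = False
    for p in ["P0", "P1", "P2"]:
        if not finish[p] and all(n <= w for n, w in zip(need[p], work)):
            work = [w + a for w, a in zip(work, allocation[p])]
            finish[p] = True
            sequence.append(p)
            progressed = True

safe = all(finish.values())
print(safe, sequence)  # True ['P1', 'P0', 'P2']
```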

03. Explain: Concurrent Processes or Concurrency.

Concurrent Processes or Concurrency :

 It refers to the execution of multiple instruction sequences at the same time.


 It occurs in an operating system when multiple process threads are executing concurrently.
 These threads can interact with one another via shared memory or message passing.
 Concurrency results in resource sharing, which causes issues like deadlocks and resource
scarcity.
 It aids with techniques such as process coordination, memory allocation, and execution
schedule to maximize throughput.

Principles of Concurrency :

 Today's technology, like multi-core processors and parallel processing, allows multiple
processes and threads to be executed simultaneously.
 Multiple processes and threads can access the same memory space, the same declared
variable in code, or even read or write to the same file.
 The amount of time a process takes to execute cannot be simply estimated, and you cannot predict which process will complete first, so techniques must be designed to deal with the problems that concurrency creates.
 Interleaved and overlapping processes are two types of concurrent processes with the same problems. It is impossible to predict the relative speed of execution, which is determined by the following factors:

1. The way the operating system handles interrupts


2. Other processes' activities
3. The operating system's scheduling policies

Problems in Concurrency :

There are various problems in concurrency. Some of them are as follows:

1. Locating programming errors

 It is difficult to spot a programming error because failures are usually not repeatable: the states of the shared components differ each time the code is executed.

2. Sharing Global Resources

 Sharing global resources is difficult.


 If two processes utilize a global variable and both alter the variable's value, the order in
which the many changes are executed is critical.

3. Locking the channel
 It could be inefficient for the OS to lock the resource and prevent other processes from using
it.

4. Optimal Allocation of Resources

 It is challenging for the OS to handle resource allocation properly.

Advantages and Disadvantages of Concurrency in Operating System :

Various advantages and disadvantages of Concurrency in Operating systems are as follows:

Advantages :

1. Better Performance

 It improves the operating system's performance.


 When one application only utilizes the processor, and another only uses the disk drive, the
time it takes to perform both apps simultaneously is less than the time it takes to run them
sequentially.

2. Better Resource Utilization

 It enables resources that are not being used by one application to be used by another.

3. Running Multiple Applications

 It enables you to execute multiple applications simultaneously.

Disadvantages :
 It is necessary to protect multiple applications from each other.
 It is necessary to use extra techniques to coordinate several applications.
 Additional performance overheads and complexities in OS are needed for switching
between applications.

04. Write about critical section & Semaphores.

Critical Section Problem


 The critical section is a code segment where the shared variables can be accessed.
 An atomic action is required in a critical section i.e. only one process can execute in its critical section
at a time.
 All the other processes have to wait to execute in their critical sections.
 The general structure of such a process is: entry section, critical section, exit section, and remainder section.
 The entry section handles the entry into the critical section.
 It acquires the resources needed for execution by the process.
 The exit section handles the exit from the critical section. It releases the resources and also informs
the other processes that the critical section is free.

Solution to the Critical Section Problem


 The critical section problem needs a solution to synchronize the different processes. The solution
to the critical section problem must satisfy the following conditions :

Mutual Exclusion
 By mutual exclusion, we mean that if one process is executing inside its critical section, then no other process may enter the critical section.

Progress

 Progress means that if one process does not need to execute in the critical section, it should not stop other processes from getting into the critical section.

Bounded Waiting

 We should be able to predict the waiting time for every process to get into the critical section. The
process must not be endlessly waiting for getting into the critical section.

Architectural Neutrality

 Our mechanism must be architecture-neutral: if our solution works on one architecture, then it should also run on other architectures as well.

Semaphores in Process Synchronization


 The semaphore was proposed by Dijkstra in 1965. It is a very significant technique for managing concurrent processes by using a simple integer value, which is known as a semaphore.
 Semaphore is simply an integer variable that is shared between threads.
 This variable is used to solve the critical section problem and to achieve process
synchronization in the multiprocessing environment.
 Semaphores are of two types:

1. Binary Semaphore :
This is also known as mutex lock. It can have only two values : 0 and 1. Its value is
initialized to 1. It is used to implement the solution of critical section problems with
multiple processes.

2. Counting Semaphore :
Its value can range over an unrestricted domain. It is used to control access to a resource
that has multiple instances.

Working of Semaphore :
Two atomic operations are used to access and change the value of the semaphore variable s :

P(s) / wait : while s <= 0 do nothing; then s = s – 1
V(s) / signal : s = s + 1
Some points regarding P and V operation :

1. P operation is also called wait, sleep, or down operation, and V operation is also called
signal, wake-up, or up operation.
2. Both operations are atomic, and the semaphore s here is initialized to one. Atomic means that the read, modify, and update of the variable happen together, with no preemption: no other read, modify, update, or operation that may change the variable is performed in between.
3. A critical section is surrounded by both operations to implement process synchronization: the critical section of a process lies between the P and V operations.

 Now, let us see how it implements mutual exclusion. Let there be two processes P1 and
P2 and a semaphore s is initialized as 1.
 Now if suppose P1 enters in its critical section then the value of semaphore s becomes 0.
Now if P2 wants to enter its critical section then it will wait until s > 0, this can only
happen when P1 finishes its critical section and calls V operation on semaphore s.
 This way, mutual exclusion is achieved; this is exactly the behaviour of a binary semaphore.
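The mutual exclusion just described can be sketched with Python's threading.Semaphore, where acquire plays the role of the P/wait operation and release of the V/signal operation:

```python
import threading

s = threading.Semaphore(1)   # binary semaphore, initialized to 1
trace = []

def process(name):
    s.acquire()              # P / wait: s becomes 0, any other caller blocks
    # ---- critical section: only one thread can be in here at a time ----
    trace.append(name + ":enter")
    trace.append(name + ":exit")
    # --------------------------------------------------------------------
    s.release()              # V / signal: s becomes 1 again

t1 = threading.Thread(target=process, args=("P1",))
t2 = threading.Thread(target=process, args=("P2",))
t1.start(); t2.start()
t1.join(); t2.join()

# Whichever process entered first also exited before the other entered:
print(trace[0].split(":")[0] == trace[1].split(":")[0])  # True
```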

Limitations of Semaphores:

1. One of the biggest limitations of semaphore is priority inversion.


2. Deadlock: if a process tries to wake up another process that is not asleep, the signal is lost, and processes may block indefinitely.
3. The operating system has to keep track of all calls to wait and to signal the semaphore.

05. Write about Inter Process Communication (IPC).

Inter Process Communication

 IPC is a type of mechanism usually provided by the operating system (or OS).
 The main aim or goal of this mechanism is to provide communications in between several
processes.
 In short, inter-process communication allows one process to let another process know that some event has occurred.
 Let us now look at the general definition of inter-process communication, which will explain
the same thing that we have discussed above.

Definition

 "Inter-process communication is used for exchanging useful information between numerous threads in one or more processes (or programs)."

Role of Synchronization in Inter Process Communication

 It is one of the essential parts of inter process communication. Typically, this is provided by
inter process communication control mechanisms, but sometimes it can also be controlled
by communication processes.

The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock

Mutual Exclusion:-

 It is required that only one process can enter the critical section at a time.
 This also helps in synchronization and creates a stable state to avoid the race condition.

Semaphore:-

 Semaphore is a type of variable that usually controls the access to the shared resources by
several processes.
 Semaphore is further divided into following two types:

1. Binary Semaphore 2. Counting Semaphore

Barrier:-

 A barrier does not allow an individual process to proceed until all the processes reach it.
 It is used by many parallel languages, and collective routines impose barriers.

Spinlock:-

 A spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock waits, staying in a loop while checking whether the lock is available or not.
 This is known as busy waiting because even though the process is active, it does not perform any useful operation (or task).

Approaches to Inter Process Communication

These are a few different approaches for Inter- Process Communication:

Pipes

 The pipe is a type of data channel that is unidirectional in nature.


 It means that the data in this type of data channel can be moved in only a single direction at
a time.
Typically, it uses the standard methods for input and output. Pipes are used in all types of POSIX systems and in different versions of Windows operating systems as well.
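A minimal sketch of a pipe using Python's os.pipe (in real use the two ends are usually split between a parent and a child process; a single process holds both here for brevity):

```python
import os

r, w = os.pipe()          # unidirectional channel: w is written, r is read

os.write(w, b"hello")     # writer end: bytes flow in one direction only
os.close(w)               # closing the write end signals end-of-data

data = os.read(r, 100)    # reader end receives the bytes in order
os.close(r)

print(data)  # b'hello'
```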

Shared Memory

 It can be referred to as a type of memory that can be used or accessed by multiple processes simultaneously.
 It is primarily used so that the processes can communicate with each other.
 Shared memory is supported by almost all POSIX and Windows operating systems.

Message Queue

 In general, several different processes are allowed to write messages to and read messages from the message queue.
 The messages are stored in the queue until their recipients retrieve them.
 In short, the message queue is very helpful in inter-process communication and is used by all operating systems.

Message Passing

 It is a type of mechanism that allows processes to synchronize and communicate with each other.
 However, by using message passing, the processes can communicate with each other without resorting to shared variables.
 Usually, the inter-process communication mechanism provides two operations: send(message) and receive(message).

Direct Communication

 In this type of communication process, usually, a link is created or established between two
communicating processes.
 However, in every pair of communicating processes, only one link can exist.

Indirect Communication

 Indirect communication can only exist or be established when processes share a common
mailbox, and each pair of these processes shares multiple communication links.
 These shared links can be unidirectional or bi-directional.

FIFO:-

 It is a type of general communication between two unrelated processes.
 It can also be considered full-duplex, which means that one process can communicate with another process and vice versa.

Need for Inter-Process Communication:

 There are numerous reasons to use inter-process communication for sharing data. Some of the most important ones are given below:

 Modularity
 Computational speedup
 Privilege separation
 Convenience
 It helps cooperating processes to communicate with each other and synchronize their actions.

06. What is process synchronization? Write about classical problems of process synchronization.

Process Synchronization :

 When two or more processes cooperate with each other, their order of execution must be preserved; otherwise there can be conflicts in their execution and inappropriate outputs can be produced.
 A cooperative process is one which can affect the execution of another process or can be affected by the execution of another process. Such processes need to be synchronized so that their order of execution can be guaranteed.
 The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as Process Synchronization.
 There are various synchronization mechanisms that are used to synchronize the processes.

Race Condition :

 A Race Condition typically occurs when two or more threads try to read, write and possibly
make the decisions based on the memory that they are accessing concurrently.

Critical Section

 The regions of a program that try to access shared resources and may cause race conditions
are called critical section.
 To avoid race condition among the processes, we need to assure that only one process at a
time can execute within the critical section.

The classical problems of synchronization are as follows:


1. Producer–Consumer problem [or] Bounded-Buffer problem
2. Readers and writers problem

Producer-Consumer problem
 Also known as the Bounded-Buffer problem. In this problem, there is a buffer of n slots, and each slot is capable of storing one unit of data.
 There are two processes operating on the buffer – the Producer and the Consumer. The producer tries to insert data and the consumer tries to remove data.
 If the processes run simultaneously without synchronization, they will not yield the expected output.
 The solution to this problem is to create two semaphores, one full and the other empty, to keep track of the concurrent processes.

Producer Consumer Problem Solution using Semaphores :
Problem Statement:
 We have a buffer of fixed size.
 A producer can produce an item and place it in the buffer.
 A consumer can pick items and can consume them.
 We need to ensure that when a producer is placing an item in the buffer, then at the
same time consumer should not consume any item.
 In this problem, buffer is the critical section.
 To solve this problem, we need two counting semaphores – Full and Empty.
 “Full” keeps track of number of items in the buffer at any given time and “Empty” keeps
track of number of unoccupied slots.
Initialization of semaphores:

mutex = 1 // binary semaphore controlling access to the buffer
full = 0 // initially all slots are empty, thus full slots are 0
empty = n // all n slots are empty initially

Solution for Producer –

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // place the item in the buffer
    signal(mutex);
    signal(full);
} while(true);

 When the producer produces an item, the value of “empty” is reduced by 1 because one slot will now be filled.
 The value of mutex is also reduced to prevent the consumer from accessing the buffer.
 Once the producer has placed the item, the value of “full” is increased by 1. The value of mutex is also increased by 1 because the task of the producer has been completed and the consumer can access the buffer.
Solution for Consumer –
do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while(true);
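The two pseudocode loops above can be run directly with Python's threading primitives; in this sketch the buffer size and item count are our own choices, and each semaphore plays exactly the role described: "empty" counts free slots, "full" counts occupied slots, and the mutex guards the buffer itself.

```python
import threading
from collections import deque

N = 3                                  # buffer size (our choice)
buffer = deque()
mutex = threading.Lock()               # mutex = 1
full = threading.Semaphore(0)          # full = 0
empty = threading.Semaphore(N)         # empty = n

ITEMS = 10
consumed = []

def producer():
    for i in range(ITEMS):
        empty.acquire()                # wait(empty)
        with mutex:                    # wait(mutex) ... signal(mutex)
            buffer.append(i)           # place the item in the buffer
        full.release()                 # signal(full)

def consumer():
    for _ in range(ITEMS):
        full.acquire()                 # wait(full)
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)                        # [0, 1, 2, ..., 9], all items in order
```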

 As the consumer removes an item from the buffer, the value of “full” is reduced by 1, and the value of mutex is also reduced so that the producer cannot access the buffer at this moment.
 Once the consumer has consumed the item, the value of “empty” is increased by 1. The value of mutex is also increased so that the producer can access the buffer again.

Readers and Writers Problem

 This problem occurs when many threads of execution try to access the same shared
resources at a time.
 Some threads may read, and some may write.
 Consider a file shared among several users. If one of them tries editing the file, no other user should be reading or writing it at the same time; otherwise the changes will not be visible to them.
 However, if some user is only reading the file, then others may read it at the same time.
 Precisely, in OS terms we call this situation the readers-writers problem.

Problem parameters:

 One set of data is shared among a number of processes.
 Once a writer is ready, it performs its write. Only one writer may write at a time.
 If a process is writing, no other process can read the data.
 If at least one reader is reading, no other process can write.
 Readers may only read; they may not write.

Solution:
Writer process:

1. The writer requests entry to the critical section.
2. If allowed, i.e. wait(wrt) succeeds, it enters and performs the write. If not allowed, it keeps waiting.
3. It exits the critical section.

do {
    // writer requests entry to the critical section
    wait(wrt);
    // performs the write
    // leaves the critical section
    signal(wrt);
} while(true);

Reader process:
1. The reader requests entry to the critical section.
2. If allowed:
 It increments the count of readers inside the critical section. If this reader is the first one entering, it locks the wrt semaphore to restrict the entry of writers while any reader is inside.
 It then signals mutex, as other readers are allowed to enter while it is reading.
 After reading, it exits the critical section. When exiting, it checks whether it is the last reader inside; if so, it signals the semaphore “wrt”, as a writer can now enter the critical section.
3. If not allowed, it keeps waiting.

do {
    // reader wants to enter the critical section
    wait(mutex);
    // the number of readers has now increased by 1
    readcnt++;
    // at least one reader is in the critical section;
    // this ensures no writer can enter if there is even one reader
    // (readers are given preference here)
    if (readcnt == 1)
        wait(wrt);
    // other readers can enter while this reader is inside
    signal(mutex);
    // current reader performs reading here
    wait(mutex);        // a reader wants to leave
    readcnt--;
    // if no reader is left in the critical section,
    if (readcnt == 0)
        signal(wrt);    // writers can enter
    signal(mutex);      // reader leaves
} while(true);
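The reader and writer pseudocode translates almost line for line into Python threads; in this sketch the thread counts are our own, and each writer additionally records a violation if it ever finds a reader inside the critical section — which the "wrt" semaphore is supposed to prevent.

```python
import threading

wrt = threading.Semaphore(1)     # the "wrt" semaphore
mutex = threading.Lock()         # protects readcnt
alock = threading.Lock()         # protects active_readers (checking only)
readcnt = 0
active_readers = 0
data = 0
violations = []

def reader():
    global readcnt, active_readers
    with mutex:                  # wait(mutex)
        readcnt += 1
        if readcnt == 1:
            wrt.acquire()        # first reader locks out writers
    with alock:
        active_readers += 1
    _ = data                     # the read happens here
    with alock:
        active_readers -= 1
    with mutex:                  # a reader wants to leave
        readcnt -= 1
        if readcnt == 0:
            wrt.release()        # last reader lets writers in

def writer():
    global data
    wrt.acquire()                # wait(wrt)
    if active_readers:           # no reader should be inside now
        violations.append(True)
    data += 1                    # performs the write
    wrt.release()                # signal(wrt)

threads = [threading.Thread(target=reader) for _ in range(5)]
threads += [threading.Thread(target=writer) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(data, violations)          # 3 [] — three writes completed, no overlap seen
```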

Unit 4 : Memory Management, Paging, Segmentation, Virtual Memory


01. Explain Physical & Virtual Address Space.

Virtual and Physical Address

 An address identifies a location in memory where code or data actually resides in the system.
 We store the data in the memory at different locations with addresses to access the data
again whenever required in the future.
 There are two types of addresses used for memory in the operating system, i.e., the physical
address and logical address.
 The logical address is a virtual address viewed by the user. The user can't view the physical
address directly.
 The logical address is used as a reference to access the physical address.
 The fundamental difference between logical and physical addresses is that the CPU
generates the logical address during program execution. In contrast, the physical address
refers to a location in the memory unit.

What is a Logical Address?

 A logical address is an address that is generated by the CPU during program execution. The
logical address is a virtual address as it does not exist physically, and therefore, it is also
known as a Virtual Address.
 This address is used as a reference to access the physical memory location by CPU.
 The term Logical Address Space is used to set all logical addresses generated from a
program's perspective.
 A logical address usually ranges from zero to a maximum (max). The user program that generates logical addresses assumes that the process runs in locations between 0 and max. The logical address (generated by the CPU) is combined with the base address held in the MMU to form the physical address.
 The hardware device called Memory-Management Unit is used for mapping logical
addresses to their corresponding physical address.

What is a Physical Address?

 The physical address identifies the physical location of required data in memory.
 The user never directly deals with the physical address but can access it by its corresponding
logical address.
 The user program generates the logical address and thinks it is running in it, but the program
needs physical memory for its execution.
 Therefore, the logical address must be mapped to the physical address by MMU before they
are used.

 The Physical Address Space is the set of all physical addresses corresponding to the logical addresses in a logical address space.

Difference between Logical and Physical Address

 The basic difference between Logical and physical addresses is that The CPU generates a
logical address from a program's perspective.
 In contrast, the physical address is a location that exists in the memory unit.
 Logical Address Space is the set of all logical addresses generated by the CPU for a program.
 In contrast, all physical addresses mapped to corresponding logical addresses are called
Physical Address Space.
 The logical address does not exist physically in the memory, whereas a physical address is a
location in the memory that can be accessed physically.

 The logical and physical addresses are identical under compile-time and load-time address binding, whereas they differ under run-time address binding.
 The CPU generates the logical address while the program is running, whereas the physical
address is computed by the Memory Management Unit (MMU).

Mapping Virtual Addresses to Physical Addresses
 Memory consists of a large array of addresses. It is the responsibility of the CPU to fetch the instruction address from the program counter.
 These instructions may cause loading or storage to a specific memory address.

 Address binding is the process of mapping from one address space to another address
space.
 Logical addresses are generated by the CPU during execution, whereas physical address
refers to the location in a physical memory unit (the one loaded into memory).
 Note that users deal only with logical addresses. The MMU translates the logical address.
The output of this process is the appropriate physical address of the data in RAM.
 An address binding can be done in three different ways:
 Compile Time: An absolute address can be generated if you know where a process will
reside in memory at compile time. That is, a physical address is generated in the program
executable during compilation.
 Loading such an executable into memory is very fast.
 But if another process occupies the generated address space, then the program crashes, and
it becomes necessary to recompile the program to use virtual address space.
 Load Time: If it is not known at the compile time where the process will reside, then
relocated addresses will be generated.
 The loader translates the relocated address to an absolute address. The base address of the
process in the main memory is added to all logical addresses by the loader to generate the
absolute address.
 If the base address of the process changes, then we need to reload the process again.
 Execution Time: The instructions are already loaded into memory and are processed by the CPU. Additional memory may be allocated or deallocated at this time.
 This method is used if the process can be moved from one memory area to another during execution (dynamic relocation, performed at run time).

Memory Management Unit (MMU):
 The run-time mapping between the virtual and physical addresses is done by a hardware
device known as MMU.
 The operating system will handle the processes and move the processes between disk and
memory in memory management.
 It keeps track of available and used memory. The Memory Management Unit is a
combination of these two registers,

1. Base Register: It contains the starting physical address of the process.
2. Limit Register: It specifies the size of the region occupied by the process, relative to the base address.
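A minimal sketch of the translation these two registers perform (the register values below are invented for illustration): a logical address beyond the limit traps to the OS, otherwise the base is added to relocate it.

```python
def translate(logical, base, limit):
    """Relocate a logical address using base/limit registers."""
    if logical >= limit:            # outside the process's region
        raise MemoryError("addressing error: trap to the OS")
    return base + logical           # add the base (relocation) register

# A process loaded at physical address 4000, occupying 500 bytes.
print(translate(100, base=4000, limit=500))   # 4100
```

Calling `translate(600, base=4000, limit=500)` would raise the trap, since offset 600 exceeds the 500-byte limit.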

02. Write about Memory allocation strategies (fixed & variable partitions).

Memory Allocation :
 Memory allocation is an action of assigning the physical or the virtual memory address
space to a process (its instructions and data). The two fundamental methods of memory
allocation are static and dynamic memory allocation.
 Static memory allocation method assigns the memory to a process, before its execution.
On the other hand, the dynamic memory allocation method assigns the memory to a
process, during its execution.

Fixed Partitioning and Variable Partitioning:

Fixed Partitioning :

 Multi-programming with fixed partitioning is a contiguous memory management technique in which the main memory is divided into fixed-sized partitions, which can be of equal or unequal size.
 Whenever we have to allocate memory to a process, a free partition that is big enough to hold the process is found, and then the memory is allocated to the process.
 If there is no free partition available, then the process waits in a queue to be allocated memory. It is one of the oldest memory management techniques and is easy to implement.

Variable Partitioning :

 Multi-programming with variable partitioning is a contiguous memory management technique in which the main memory is not divided into fixed partitions; instead, each process is allocated a chunk of free memory that is big enough for it to fit.
 The space which is left over is considered free space, which can be used by other processes.
 It also provides the concept of compaction. In compaction, the scattered free spaces are combined into a single large block of free memory.

Difference between Fixed Partitioning and Variable Partitioning :

03. Write about Paging & Segmentation.

What is Paging in OS?


 Paging is a storage mechanism that allows OS to retrieve processes from the secondary
storage into the main memory in the form of pages.
 In the Paging method, the main memory is divided into small fixed-size blocks of physical
memory, which is called frames.
 The size of a frame should be kept the same as that of a page to have maximum utilization
of the main memory.
 Paging is used for faster access to data, and it is a logical concept.

Example of Paging in OS
 For example, if the main memory size is 16 KB and Frame size is 1 KB. Here, the main
memory will be divided into the collection of 16 frames of 1 KB each.
 There are 4 separate processes in the system that is A1, A2, A3, and A4 of 4 KB each. Here,
all the processes are divided into pages of 1 KB each so that operating system can store
one page in one frame.
 At the beginning of the process, all the frames remain empty so that all the pages of the
processes will get stored in a contiguous way.

 In this example you can see that A2 and A4 are moved to the waiting state after some time. Therefore, eight frames become empty, so other pages can be loaded into those empty blocks. The process A5, of size 8 pages (8 KB), is waiting in the ready queue.


 In this example, you can see that there are eight non-contiguous frames available in the memory, and paging offers the flexibility of storing a process at different places. This allows us to load the pages of process A5 in place of A2 and A4.

Advantages of Paging method:

 Easy to use memory management algorithm
 No need for external fragmentation
 Swapping is easy between equal-sized pages and page frames.

Disadvantages of Paging method:

 May cause internal fragmentation
 Page tables consume additional memory.
 Multi-level paging may lead to memory reference overhead.
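The frame lookup that paging performs can be sketched in a few lines; the 1 KB page size matches the example above, while the page-table contents are invented for illustration.

```python
PAGE_SIZE = 1024                      # 1 KB pages, as in the example above

# page_table[p] = frame that currently holds page p (invented mapping)
page_table = {0: 5, 1: 2, 2: 7}

def paged_translate(logical):
    page = logical // PAGE_SIZE       # page number
    offset = logical % PAGE_SIZE      # offset within the page
    frame = page_table[page]          # page-table lookup
    return frame * PAGE_SIZE + offset

print(paged_translate(1500))          # page 1, offset 476 -> frame 2 -> 2524
```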

Segmentation in Operating System


 A process is divided into segments. The chunks that a program is divided into, which are not necessarily all of the same size, are called segments.
 Segmentation gives the user's view of the process, which paging does not give.
 Here the user's view is mapped to physical memory.
Following are the types of segmentation:

 Virtual memory segmentation –
Each process is divided into a number of segments, not all of which are resident at any one point in time.
 Simple segmentation –
Each process is divided into a number of segments, all of which are loaded into memory at run time, though not necessarily contiguously.

 There is no simple relationship between logical addresses and physical addresses in segmentation. A table called the Segment Table stores the information about all such segments.
Segment Table – It maps a two-dimensional logical address into a one-dimensional physical address. Each of its entries has:
 Base Address: It contains the starting physical address where the segments reside in
memory.
 Limit: It specifies the length of the segment.

Translation of Two-dimensional Logical Address to One-dimensional Physical Address

 The address generated by the CPU is divided into:
 Segment number (s): Bits required to identify the segment.
 Segment offset (d): Bits required to represent the offset within the segment.
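The (s, d) translation can be sketched the same way as paging; the segment-table base/limit values below are invented for illustration. The offset is checked against the segment's limit before the base is added.

```python
# segment_table[s] = (base, limit) — invented values for illustration
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def seg_translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                    # offset beyond the segment's length
        raise MemoryError("trap: segment offset out of range")
    return base + d

print(seg_translate(2, 53))           # 4300 + 53 = 4353
```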

Advantages of Segmentation :
 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.

Disadvantage of Segmentation –
 As processes are loaded and removed from the memory, the free memory space is
broken into little pieces, causing External fragmentation.

04. Write about Virtual Memory.
Virtual Memory in Operating System

 Virtual Memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of the main memory.
 The addresses a program may use to reference memory are distinguished from the
addresses the memory system uses to identify physical storage sites, and program-
generated addresses are translated automatically to the corresponding machine
addresses.
 The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations.
 It is a technique that is implemented using both hardware and software. It maps memory
addresses used by a program, called virtual addresses, into physical addresses in
computer memory.
 All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time.
 This means that a process can be swapped in and out of the main memory such that it
occupies different places in the main memory at different times during the course of
execution.
 A process may be broken into a number of pieces, and these pieces need not be contiguously located in the main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
 If these characteristics are present then, it is not necessary that all the pages or
segments are present in the main memory during execution.
 This means that the required pages need to be loaded into memory whenever required.
 Virtual memory is implemented using Demand Paging or Demand Segmentation.

 Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is
known as demand paging.
The process includes the following steps :
1. If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed the
OS must bring the required page into the memory.
3. The OS will locate the required page on secondary storage (the disk).
4. The required page will be brought from the disk into a free frame in physical memory. If no frame is free, page replacement algorithms are used to decide which page in physical memory to replace.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place the
process back into the ready state.

 Hence, whenever a page fault occurs, these steps are followed by the operating system and the required page is brought into memory.

 For example, consider a program with 8 pages stored on the disk. When this program is executed, only 3 pages (A, C & F) are loaded into the physical memory, in frames 4, 6 & 9.
 Whenever another page wants to enter into the physical memory, already entered pages
will be replaced using page replacement algorithms.

05. Explain Page Replacement Algorithms.
Page Replacement Algorithms in Operating Systems
In an operating system that uses paging for memory management, a page replacement algorithm
is needed to decide which page needs to be replaced when new page comes in.

Page Fault – A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space, but not loaded in physical memory.

Since actual physical memory is much smaller than virtual memory, page faults happen. In case of
page fault, Operating System might have to replace one of the existing pages with the newly
needed page. Different page replacement algorithms suggest different ways to decide which page
to replace. The target for all algorithms is to reduce the number of page faults.

Page Replacement Algorithms :

1. First In First Out (FIFO) –


This is the simplest page replacement algorithm. In this algorithm, the operating system keeps
track of all pages in the memory in a queue, the oldest page is in the front of the queue. When a
page needs to be replaced page in the front of the queue is selected for removal.
Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory, so —> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page, i.e. 1 —> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page, i.e. 3 —> 1 page fault.
Finally, when 3 comes, it is not available, so it replaces 0 —> 1 page fault.
Total: 6 page faults.

Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults when
increasing the number of page frames while using the First in First Out (FIFO) page replacement
algorithm. For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots,
we get 9 total page faults, but if we increase slots to 4, we get 10 page faults.
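Both the walkthrough above (using the full reference string, including the final 3) and Belady's anomaly can be checked with a short FIFO simulator:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()                  # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:        # page fault
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))              # 6
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))     # 9 10 — more frames, more faults
```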

2. Optimal Page replacement –
In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.
Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
When 0 comes, it is already there, so —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 page fault.
0 is already there, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because those pages are already available in memory. Total: 6 page faults.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot
know future requests. The use of Optimal Page replacement is to set up a benchmark so that
other replacement algorithms can be analysed against it.
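Even though Optimal cannot be implemented in a real OS, it is easy to simulate offline as a benchmark, since the whole reference string is known in advance:

```python
def optimal_faults(refs, nframes):
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                      # hit, nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.append(page)           # free frame available
        else:
            # evict the page whose next use is farthest in the future
            def next_use(p):
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")   # never used again
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6
```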

3. Least Recently Used –
In this algorithm, the page that is least recently used is replaced.
Example-3 Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames.
Find number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there, so —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 page fault.
0 is already in memory, so —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because those pages are already available in memory. Total: 6 page faults.
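The LRU walkthrough can be checked with a short simulator that keeps the resident pages ordered by recency of use:

```python
def lru_faults(refs, nframes):
    frames = []                        # most recently used page at the end
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)        # hit: refresh its recency
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the least recently used page
        frames.append(page)
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))      # 6
```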



Unit 5 : File & I/O Management, Disk Scheduling Algorithms


01. Write short note on Directory Structure.

Directory structure:

 A directory can be defined as a listing of the related files on the disk. The directory may store some or all of the file attributes.
 To get the benefit of different file systems on different operating systems, a hard disk can be divided into a number of partitions of different sizes. The partitions are also called volumes or minidisks.
 Each partition must have at least one directory in which all the files of the partition can be listed.
 A directory entry is maintained for each file in the directory, storing all the information related to that file.

 A directory is a file which contains the Meta data (data about data) of the bunch of files.
 Every Directory supports a number of common operations on the file:

 File Creation : Creating a file in the directory (folder).


 Search for the file : Search for the file in the directory.
 File deletion : Deleting a file from the directory.
 Renaming the file : Change the file name.
 Traversing Files : Visiting all files in the directory.
 Listing of files : Display all files in the directory.

02. Explain File Allocation Methods.
File Allocation Methods
 There are different kinds of methods used to allocate disk space. We must select the best method for file allocation because it directly affects system performance and efficiency. The allocation method determines how the disk is utilized and how files are accessed.
 There are different types of file allocation methods, but we mainly use three types of file
allocation methods:

1. Contiguous allocation
2. Linked list allocation
3. Indexed allocation

These methods provide quick access to the file blocks and also the utilization of disk space
in an efficient manner.

Contiguous Allocation:
 Contiguous allocation is one of the most used methods for allocation. Contiguous allocation means that each file occupies a set of physically contiguous blocks on the disk.

 We can see in the below figure that in the directory, we have three files. In the table, we
have mentioned the starting block and the length of all the files. We can see in the table
that for each file, we allocate a contiguous block.

Example of contiguous allocation
 We can see in the given diagram that there is a file named ‘mail’. The file starts from the 19th block and the length of the file is 6. So, the file occupies 6 blocks in a contiguous manner: blocks 19, 20, 21, 22, 23 and 24.
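A contiguous allocator must find a hole of free blocks long enough for the file; this first-fit sketch (the set of free blocks is invented) reproduces the ‘mail’ layout:

```python
def allocate_contiguous(free, length):
    """First-fit: return the blocks of the first free run of `length` blocks."""
    run = []
    for b in sorted(free):
        # extend the current run if b is adjacent, else start a new one
        run = run + [b] if run and b == run[-1] + 1 else [b]
        if len(run) == length:
            free.difference_update(run)   # mark the blocks as allocated
            return run
    return None                           # no hole is big enough

disk_free = set(range(19, 30))            # blocks 19..29 are free (invented)
print(allocate_contiguous(disk_free, 6))  # [19, 20, 21, 22, 23, 24], like 'mail'
```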

Linked List Allocation

 This allocation method overcomes the drawbacks of the contiguous allocation method.
 In this file allocation method, each file is treated as a linked list of disk blocks.
 In the linked list allocation method, it is not required that the disk blocks assigned to a specific file be contiguous on the disk.
 The directory entry comprises a pointer to the starting file block and a pointer to the ending file block.
 Each disk block allocated to a file contains a pointer that points to the next disk block allocated to the same file.

Example of linked list allocation

 We can see in the below figure that we have a file named ‘jeep.’
 The value of the start is 9. So, we have to start the allocation from the 9th block, and blocks are allocated in a random manner.
 The value of the end is 25. It means the allocation finishes at the 25th block.
 We can see in the below figure that block 25 contains -1, which represents a null pointer; it does not point to another block.


Indexed Allocation
 In this scheme, a special block known as the Index block contains the pointers to all the blocks
occupied by a file.
 Each file has its own index block. The i-th entry in the index block contains the disk address of the i-th file block.
 The directory entry contains the address of the index block as shown in the image:

03. Write about Device management, Pipe() system call, Buffering & Shared Memory.
Device Management in Operating System
 Device management means controlling the Input/Output devices like disk, microphone, keyboard,
printer, magnetic tape, USB ports, scanner, other accessories, and supporting units like control
channels.
 A process may require various resources, including main memory, file access, and access to disk
drives, and others.
 If resources are available, they are allocated and control is returned to the CPU. Otherwise, the
process must wait until the resources become available.
 The system has multiple devices, and in order to handle these physical or virtual devices, the
operating system requires a separate program known as device controller. It also determines
whether the requested device is available.
 The fundamentals of I/O devices may be divided into three categories:
1. Boot Device
2. Character Device
3. Network Device

Boot Device
It stores data in fixed-size blocks, each with its unique address. For example- Disks.

Character Device
It transmits or accepts a stream of characters, none of which can be addressed individually. For
instance, keyboards, printers, etc.

Network Device
It is used for transmitting the data packets.

Functions of the device management


 The operating system (OS) handles communication with the devices via their drivers. The OS
component gives a uniform interface for accessing devices with various physical features. There are
various functions of device management in the operating system. Some of them are as follows:
1. It keeps track of the data, status, location, and usage of every device.
2. It enforces the pre-determined policies and decides which process receives the device when and for
how long.
3. It improves the performance of specific devices.
4. It monitors the status of every device, including printers, storage drives, and other devices.
5. It allocates and deallocates devices effectively. De-allocation happens at two levels: first, when an
I/O command is issued and the device is temporarily freed; second, when the job is completed and
the device is permanently released.



Features of Device Management
Various features of the device management are as follows:

1. The OS interacts with the device controllers via the device drivers while allocating the device to the
multiple processes executing on the system.
2. Device drivers can also be thought of as system software programs that bridge processes and device
controllers.
3. Another key job of the device management function is to implement the API through which processes access devices.
4. Device drivers are software programs that allow an operating system to control the operation of
numerous devices effectively.
5. The device controller used in device management operations mainly contains three registers:
command, status, and data.

pipe() System call


 A pipe is a connection between two processes, such that the standard output from one process
becomes the standard input of the other process.
 In the UNIX Operating System, pipes are useful for communication between related processes
(inter-process communication).
 A pipe is one-way communication only, i.e., one process writes to the pipe and the other process
reads from it. Opening a pipe creates an area of main memory that is treated as a “virtual file”.
 The pipe can be used by the creating process, as well as all its child processes, for reading and
writing. One process can write to this “virtual file” or pipe and another related process can read
from it.
 If a process tries to read before something is written to the pipe, the process is suspended until
something is written.
 The pipe system call finds the first two available positions in the process’s open file table and
allocates them for the read and write ends of the pipe.
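The points above can be sketched with the POSIX pipe() and fork() calls. This is a minimal illustrative example, not part of any standard API (the helper name pipe_demo is ours): the parent writes a message into the pipe, the child blocks on read() until the data arrives, and the child reports how many bytes it received through its exit status.

```c
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch (POSIX, assumes a UNIX-like system): parent writes a message,
   child reads it and exits with the number of bytes received. */
int pipe_demo(const char *msg) {
    int fd[2];                        /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) return -1;
    pid_t pid = fork();
    if (pid == -1) return -1;
    if (pid == 0) {                   /* child: the reader */
        close(fd[1]);                 /* child does not write */
        char buf[128];
        /* read() blocks here until the parent writes something */
        ssize_t n = read(fd[0], buf, sizeof buf);
        close(fd[0]);
        _exit(n < 0 ? 0 : (int)n);
    }
    close(fd[0]);                     /* parent: the writer */
    ssize_t w = write(fd[1], msg, strlen(msg));
    (void)w;                          /* sketch: ignore partial writes */
    close(fd[1]);                     /* closing the write end signals EOF */
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Closing the unused ends in each process is important: the reader only sees end-of-file once every write end of the pipe has been closed.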



Buffering in Operating System
 The buffer is an area in the main memory used to store or hold the data temporarily. In other
words, buffer temporarily stores data transmitted from one place to another, either between two
devices or an application. The act of storing data temporarily in the buffer is called buffering.
 Most buffers are implemented in software, which typically uses the faster RAM to store
temporary data due to the much faster access time than hard disk drives. Buffers are typically
used when there is a difference between the rate of received data and the rate of processed data,
for example, in a printer spooler or online video streaming.
 A buffer often adjusts timing by implementing a queue or FIFO algorithm in memory,
simultaneously writing data into the queue at one rate and reading it at another rate.

Types of Buffering
 There are three main types of buffering in the operating system, such as:

1. Single Buffer

 In Single Buffering, only one buffer is used to transfer the data between two devices.
 The producer produces one block of data into the buffer.
 After that, the consumer consumes the buffer.
 Only when the buffer is empty does the producer produce data again.



2. Double Buffer
 In Double Buffering, two buffers are used in place of one. The producer fills one buffer while the
consumer simultaneously consumes the other, so the producer does not need to wait for a buffer
to empty before filling it. Double buffering is also known as buffer swapping.

3. Circular Buffer

 When more than two buffers are used, the collection of buffers is called a circular buffer. Each
buffer is one unit in the circular buffer. The data transfer rate increases with a circular buffer
compared to double buffering.
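The circular scheme can be made concrete with a small ring buffer. The sketch below is illustrative only (the type name, functions, and the size of 4 slots are our own choices); the modulo arithmetic is what makes the buffer "circular", letting the producer and consumer indices wrap around:

```c
#define RING_SIZE 4                    /* illustrative number of slots */

typedef struct {
    int data[RING_SIZE];
    int head, tail, count;             /* head = next read, tail = next write */
} Ring;

void ring_init(Ring *r) { r->head = r->tail = r->count = 0; }

/* Producer writes at the tail; returns 0 when the buffer is full. */
int ring_put(Ring *r, int v) {
    if (r->count == RING_SIZE) return 0;
    r->data[r->tail] = v;
    r->tail = (r->tail + 1) % RING_SIZE;   /* wrap around */
    r->count++;
    return 1;
}

/* Consumer reads at the head; returns 0 when the buffer is empty. */
int ring_get(Ring *r, int *v) {
    if (r->count == 0) return 0;
    *v = r->data[r->head];
    r->head = (r->head + 1) % RING_SIZE;   /* wrap around */
    r->count--;
    return 1;
}
```

With more than two slots, the producer can run several units ahead of the consumer, which is why a circular buffer smooths out larger rate differences than double buffering can.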

IPC through Shared Memory


 Shared memory is a memory region shared between two or more processes. Each process has its
own address space; if any process wants to communicate some information from its own address
space to other processes, it is only possible with IPC (inter-process communication) techniques.
 Shared memory is the fastest inter-process communication mechanism. The operating system
maps a memory segment in the address space of several processes to read and write in that
memory segment without calling operating system functions.



 For applications that exchange large amounts of data, shared memory is far superior to message
passing techniques like message queues, which require system calls for every data exchange. To use
shared memory, we have to perform two basic steps:
 Request a memory segment that can be shared between processes to the operating system.
 Associate a part of that memory or the whole memory with the address space of the calling
process.
 A shared memory segment is a portion of physical memory that is shared by multiple processes. In
this region, processes can set up structures, and others may read/write on them. When a shared
memory region is established in two or more processes, there is no guarantee that the regions will
be placed at the same base address. Semaphores can be used when synchronization is required.

 For example, one process might have the shared region starting at address 0x60000 while the
other process uses 0x70000. It is critical to understand that these two addresses refer to the exact
same piece of data. So storing the number 1 in the first process's address 0x60000 means the
second process has the value of 1 at 0x70000. The two different addresses refer to the exact same
location.

Functions of IPC Using Shared Memory


 Two functions shmget() and shmat() are used for IPC using shared memory. shmget() function is
used to create the shared memory segment, while the shmat() function is used to attach the shared
segment with the process's address space.

shmget() Function

 The first parameter specifies the unique number (called key) identifying the shared segment. The
second parameter is the size of the shared segment, e.g., 1024 bytes or 2048 bytes. The third
parameter specifies the permissions on the shared segment.
 On success, the shmget() function returns a valid identifier, while on failure, it returns -1.
 Syntax

#include <sys/ipc.h>
#include <sys/shm.h>
int shmget (key_t key, size_t size, int shmflg);



shmat() Function
 shmat() function is used to attach the created shared memory segment associated with the shared
memory identifier specified by shmid to the calling process's address space.
 The first parameter here is the identifier which the shmget() function returns on success. The second
parameter is the address where to attach it to the calling process.
 A NULL value of the second parameter means that the system will automatically choose a suitable
address.
 The third parameter (shmflg) is 0 when the second parameter is NULL; otherwise, flags such as
SHM_RND may be specified.

#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
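A minimal sketch tying the two calls together (System V IPC, so it assumes a UNIX-like system; the helper name shm_roundtrip and the 1 KB segment size are ours): create a private segment, attach it, write a string into it, read it back, then detach and remove the segment.

```c
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Sketch: create a 1 KB private shared segment, attach it with shmat(),
   write and read through the returned pointer, then clean up.
   Assumes outsz > 0. */
int shm_roundtrip(const char *msg, char *out, size_t outsz) {
    int shmid = shmget(IPC_PRIVATE, 1024, IPC_CREAT | 0666);
    if (shmid == -1) return -1;                 /* shmget failed */
    char *mem = (char *)shmat(shmid, NULL, 0);  /* NULL: kernel picks address */
    if (mem == (char *)-1) { shmctl(shmid, IPC_RMID, NULL); return -1; }
    strncpy(mem, msg, 1023);                    /* "producer" side writes */
    mem[1023] = '\0';
    strncpy(out, mem, outsz - 1);               /* "consumer" side reads */
    out[outsz - 1] = '\0';
    shmdt(mem);                                 /* detach from address space */
    shmctl(shmid, IPC_RMID, NULL);              /* mark segment for removal */
    return 0;
}
```

In a real two-process setup the producer and consumer would each call shmget() with the same key and shmat() separately, and synchronize their accesses with semaphores as noted above.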

04. Explain: Disk Scheduling Algorithms.

Disk Scheduling Algorithms:


 A 'Disk Scheduling Algorithm' is an algorithm that manages the input and output requests
arriving for the disk in a system.
 For executing any process, memory is required.
 When it comes to accessing data on a hard disk, the process becomes very slow, as the hard disk
is the slowest part of the computer. There are various methods by which the requests can be
scheduled efficiently.

Importance of Disk Scheduling Algorithms:


 In our system, multiple requests are coming to the disk simultaneously which will make a queue of
requests. This queue of requests will result in an increased waiting time of requests.
 The requests wait until the request currently being processed completes.
 To overcome this queuing and manage the timing of these requests, 'Disk Scheduling' is important
in our Operating System.
Types:
 We have various types of Disk Scheduling Algorithms available in our system as shown in below
figure.
 Each one has its own capabilities and weak points.




1. FCFS disk scheduling algorithm-

It stands for 'first-come-first-serve'. As the name suggests, the request that comes first will be
processed first, and so on. The requests coming to the disk are arranged in the sequence in which
they arrive. Since every request is eventually processed in this algorithm, there is no chance of 'starvation'.

Example: Suppose a disk has 200 tracks (0-199). The request sequence (82, 170, 43, 140, 24, 16, 190)
is shown in the given figure and the head starts at track 50.

Explanation: In the above image, we can see the head starts at position 50 and moves to request
82. After serving it, the disk arm moves towards the second request, which is 170, and then to
request 43, and so on. In this algorithm, the disk arm serves the requests in arrival order. In this
way, all the requests are served in arrival order until the queue is exhausted.

"Seek time" is calculated by adding the head movement differences of all the requests:

Seek time = (82-50) + (170-82) + (170-43) + (140-43) + (140-24) + (24-16) + (190-16) = 642
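The calculation can be sketched in C. The function name fcfs_seek is ours; it simply sums the absolute head movements in arrival order, which for the sequence above gives 642:

```c
#include <stdlib.h>

/* FCFS: total head movement is the sum of |next request - current head|,
   taken strictly in arrival order. */
int fcfs_seek(int head, const int *req, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);  /* distance to the next request */
        head = req[i];                /* head is now at that track */
    }
    return total;
}
```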



2. SSTF disk scheduling algorithm-
It stands for 'Shortest seek time first'. As the name suggests, it searches for the request having the
least 'seek time' from the current head position and executes it first. This algorithm gives a smaller
total 'seek time' than the FCFS algorithm.

Example: Suppose a disk has 200 tracks (0-199). The request sequence (82, 170, 43, 140, 24, 16, 190)
is shown in the given figure and the head position is at 50.

Explanation: The disk arm serves the request with the least difference in head movement. Here the
least difference is (50-43) = 7. The criterion is not the smallest track number but the shortest time
the head will take to reach the nearest pending request. So, after 43, the head is nearest to 24, and
from there the head is nearest to request 16. After 16, the nearest request is 82, so the disk arm
moves to serve request 82, and so on.

Hence, Calculation of Seek Time = (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) +
(190-170) = 208
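The greedy selection can be sketched as follows (sstf_seek is our own name for this helper; since valid tracks are non-negative, served requests are marked with -1, so the input array is modified):

```c
#include <stdlib.h>

/* SSTF sketch: repeatedly pick the pending request closest to the head,
   add its distance, and mark it served (-1). */
int sstf_seek(int head, int *req, int n) {
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1, bestDist = 0;
        for (int i = 0; i < n; i++) {          /* find nearest unserved request */
            if (req[i] < 0) continue;          /* -1 means already served */
            int d = abs(req[i] - head);
            if (best == -1 || d < bestDist) { best = i; bestDist = d; }
        }
        total += bestDist;
        head = req[best];                      /* move the head there */
        req[best] = -1;                        /* mark as served */
    }
    return total;
}
```

Note that always taking the nearest request is a greedy choice: requests far from the current head can be postponed repeatedly, which is why SSTF can starve outlying requests even though it beats FCFS on total seek time.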

3. SCAN disk scheduling algorithm:

In this algorithm, the head starts to scan all the requests in a direction and reaches the end of the
disk. After that, it reverses its direction and starts to scan again the requests in its path and serves
them. Due to this feature, this algorithm is also known as the "Elevator Algorithm".

Example: Suppose a disk has 200 tracks (0-199). The request sequence(82,170,43,140,24,16,190) is
shown in the given figure and the head position is at 50. The 'disk arm' will first move to the larger
values.



Explanation: In the above image, we can see that the disk arm starts from position 50 and moves in a
single direction until it reaches the end of the disk, i.e., track 199. After that, it reverses
and starts servicing in the opposite direction until it reaches the other end of the disk. This process
keeps going on until all requests are served.

Hence, the Calculation of 'Seek Time' will be: (199-50) + (199-16) = 332
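For the case illustrated here (head sweeping toward the larger tracks first), the total movement has a simple closed form: travel up to the last track of the disk, then back down to the smallest pending request. A sketch (scan_seek is our own name):

```c
/* SCAN, sweeping toward larger tracks first: the head travels up to the
   end of the disk (maxTrack), then reverses down to the smallest pending
   request, if any request lies below the starting position. */
int scan_seek(int head, const int *req, int n, int maxTrack) {
    int lo = head;
    for (int i = 0; i < n; i++)
        if (req[i] < lo) lo = req[i];          /* smallest pending track */
    return (maxTrack - head) + (lo < head ? maxTrack - lo : 0);
}
```

A SCAN variant sweeping toward track 0 first would mirror this formula; the sketch covers only the direction used in the example.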

4. C-SCAN disk scheduling algorithm:

It stands for "Circular SCAN". This algorithm is almost the same as the SCAN algorithm; the
difference lies in what happens after the head reaches one end of the disk. The disk arm moves
toward one end of the disk and serves the requests coming in its path.

After reaching that end of the disk, it reverses its direction and moves to the other end
of the disk, but while going back it does not serve any requests.

Example: Suppose a disk has 200 tracks (0-199). The request sequence (82, 170, 43, 140, 24, 16, 190)
is shown in the given figure and the head position is at 50.

Explanation: In the above figure, the disk arm starts from position 50, reaches the end (199),
and serves all the requests in its path. Then it reverses direction and moves to the other end of
the disk, i.e., 0, without serving any requests on the way.

After reaching 0, the head again moves towards the largest remaining request, which is 43. So, the
head starts from 0 and moves to request 43, serving all the requests coming in its path. This
process keeps going.

Hence, Seek Time = (199-50) + (199-0) + (43-0) = 391

5. LOOK disk scheduling algorithm:

In this algorithm, the disk arm moves towards one end, servicing requests in its path, but only as
far as the last request in that direction. After reaching the last request, it reverses its direction and
services requests on the way back. It does not go to the end of the disk; instead, it goes only to the
end of the requests.



Example: Suppose a disk has 200 tracks (0-199). The request sequence (82, 170, 43, 140, 24, 16, 190)
is shown in the given figure and the head position is at 50.

Explanation: The disk arm starts from 50 and serves requests in one direction only, but instead of
going to the end of the disk, it goes only to the last request, i.e., 190. Then it reverses and serves
requests until the last request on the other side, i.e., 16. Hence, Seek time = (190-50) + (190-16) = 314

6. C-LOOK disk scheduling algorithm:

The C-LOOK algorithm is almost the same as the LOOK algorithm. The only difference is that after
reaching the last request in one direction, it reverses the direction of the head and moves back
towards the initial position. While moving back, it does not serve any requests.

Example: Suppose a disk has 200 tracks (0-199). The request sequence (82, 170, 43, 140, 24, 16, 190)
is shown in the given figure and the head position is at 50.

Explanation: The disk arm starts from 50 and serves requests in one direction only, but instead of
going to the end of the disk, it goes only to the last request, i.e., 190. Then it moves back to the
last request on the other side of the disk, i.e., 16, without serving any requests on the way, and
from there serves the requests in its path.
Hence, Seek Time = (190-50) + (190-16) + (43-16) = 341




KRISHNA UNIVERSITY
B.Sc DEGREE (CBCS) EXAMINATION
(Examination at the end of Third Semester)
OPERATING SYSTEMS
Model Paper 1

Time : Three hours Maximum : 70 marks

SECTION A – (5 x 4 = 20 marks)
Answer any FIVE of the following questions.

1. Define operating system. Give examples.


2. Write functions of OS.
3. Explain ‘Resource Abstraction” in OS.
4. Explain system calls & system programs.
5. Differences b/w user mode & kernel mode.
6. Explain different types of threads.
7. Explain Page Replacement Algorithms.
8. Write short note on Directory Structure.

SECTION B – (5 x 10 = 50 marks)
Answer following questions.

UNIT I

9. Write about evolution of operating systems.
(or)
10. Explain different types of operating systems.

UNIT II

11. Explain threading issues & thread libraries.
(or)
12. Explain process scheduling algorithms.

UNIT III

13. Write problems of process synchronization.
(or)
14. Write about Inter Process Communication.

UNIT IV

15. Write about Paging & Segmentation.
(or)
16. Write about Virtual Memory.

UNIT V

17. Explain File Allocation Methods.
(or)
18. Disk scheduling algorithms.




KRISHNA UNIVERSITY
B.Sc DEGREE (CBCS) EXAMINATION
(Examination at the end of Third Semester)
OPERATING SYSTEMS
Model Paper 2

Time : Three hours Maximum : 70 marks

SECTION A – (5 x 4 = 20 marks)
Answer any FIVE of the following questions.

1. User view & system view of the process & resources.
2. Write about process abstraction.
3. Explain ‘Concurrent processes’.
4. Write about critical section & semaphores.
5. Physical & Virtual address space.
6. Explain Memory allocation strategies.
7. Explain Buffer & Shared memory.
8. Write about device management.

SECTION B – (5 x 10 = 50 marks)
Answer following questions.

UNIT I

9. Write about History of operating systems.
(or)
10. Functions of operating systems.

UNIT II

11. Explain about different types of threads.
(or)
12. Explain preemptive & non preemptive scheduling algorithms.

UNIT III

13. Explain necessary and sufficient conditions for deadlocks.
(or)
14. Deadlock handling approaches.

UNIT IV

15. Write about Paging & Segmentation.
(or)
16. Write about Page Replacement Algorithms.

UNIT V

17. Explain Contiguous & Linked list File Allocation Methods.
(or)
18. Explain FCFS & SSTF scheduling algorithms.

