
Unit4

Operating System

An operating system lies in the category of system software. It manages all the
resources of the computer and acts as an interface between the software and the
computer hardware. The operating system is designed so that it can manage the overall
resources and operations of the computer.
Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer. It controls and monitors the execution of all other programs
that reside in the computer, which also includes application programs and other system
software of the computer. Examples of Operating Systems are Windows, Linux, Mac OS,
etc.
An Operating System (OS) is a collection of software that manages computer hardware
resources and provides common services for computer programs. The operating system
is the most important type of system software in a computer system.

Operating System Definition and Function

In a computer system (comprising hardware and software), the hardware can only
understand machine code (in the form of 0s and 1s), which does not make any sense to
an ordinary user.

We need a system that can act as an intermediary and manage all the processes and
resources present in the system.

An Operating System can be defined as an interface between user and hardware. It
is responsible for the execution of all the processes, resource allocation, CPU
management, file management, and many other tasks.
The purpose of an operating system is to provide an environment in which a user can
execute programs in a convenient and efficient manner.

Structure of a Computer System

A Computer System consists of:

• Users (people who are using the computer)
• Application Programs (Compilers, Databases, Games, Video player, Browsers,
etc.)
• System Programs (Shells, Editors, Compilers, etc.)
• Operating System ( A special program which acts as an interface between user
and hardware )
• Hardware ( CPU, Disks, Memory, etc)

Functions of the Operating System


• Resource Management: The operating system manages and allocates memory,
CPU time, and other hardware resources among the various programs and
processes running on the computer.
• Process Management: The operating system is responsible for starting, stopping,
and managing processes and programs. It also controls the scheduling of processes
and allocates resources to them.
• Memory Management: The operating system manages the computer’s primary
memory and provides mechanisms for optimizing memory usage.
• Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such as
access controls and encryption.
• Job Accounting: It keeps track of time and resources used by various jobs or
users.
• File Management: The operating system is responsible for organizing and
managing the file system, including the creation, deletion, and manipulation of files
and directories.
• Device Management: The operating system manages input/output devices such as
printers, keyboards, mice, and displays. It provides the necessary drivers and
interfaces to enable communication between the devices and the computer.
• Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols, and
sharing resources such as printers and files over a network.
• User Interface: The operating system provides a user interface that enables users
to interact with the computer system. This can be a Graphical User Interface (GUI),
a Command-Line Interface (CLI), or a combination of both.
• Backup and Recovery: The operating system provides mechanisms for backing up
data and recovering it in case of system failures, errors, or disasters.
• Virtualization: The operating system provides virtualization capabilities that allow
multiple operating systems or applications to run on a single physical machine. This
can enable efficient use of resources and flexibility in managing workloads.
• Performance Monitoring: The operating system provides tools for monitoring and
optimizing system performance, including identifying bottlenecks, optimizing
resource usage, and analyzing system logs and metrics.
• Time-Sharing: The operating system enables multiple users to share a computer
system and its resources simultaneously by providing time-sharing mechanisms that
allocate resources fairly and efficiently.
• System Calls: The operating system provides a set of system calls that enable
applications to interact with the operating system and access its resources. System
calls provide a standardized interface between applications and the operating
system, enabling portability and compatibility across different hardware and software
platforms.
• Error-detecting Aids: These contain methods that include the production of dumps,
traces, error messages, and other debugging and error-detecting methods.
For more, refer to Functions of Operating System.
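As a small illustration of the system-call interface mentioned in the list above (a hedged sketch, assuming a POSIX system; the message text is arbitrary), the following C program asks the kernel to perform output on its behalf through the write() system call:

#include <string.h>
#include <unistd.h>

int main(void) {
    const char msg[] = "hello via a system call\n";
    /* write() is a thin wrapper around the kernel's write system call;
       file descriptor 1 is standard output. */
    write(1, msg, strlen(msg));
    return 0;
}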
Objectives of Operating Systems
Let us now see some of the objectives of the operating system, which are mentioned
below.
• Convenient to use: One of the objectives is to make the computer system more
convenient to use in an efficient manner.
• User Friendly: To make the computer system more interactive with a more
convenient interface for the users.
• Easy Access: To provide easy access to users for using resources by acting as an
intermediary between the hardware and its users.
• Management of Resources: For managing the resources of a computer in a better
and faster way.
• Controls and Monitoring: By keeping track of who is using which resource,
granting resource requests, and mediating conflicting requests from different
programs and users.
• Fair Sharing of Resources: Providing efficient and fair sharing of resources
between the users and programs.
Types of Operating Systems
• Batch Operating System: A Batch Operating System is a type of operating system
that does not interact with the computer directly. There is an operator who takes
similar jobs having the same requirements and groups them into batches.
• Time-sharing Operating System: Time-sharing Operating System is a type of
operating system that allows many users to share computer resources (maximum
utilization of the resources).
• Distributed Operating System: Distributed Operating System is a type of operating
system that manages a group of different computers and makes them appear to be a
single computer. These operating systems are designed to operate on a network of
computers. They allow multiple users to access shared resources and communicate
with each other over the network. Examples include Microsoft Windows Server and
various distributions of Linux designed for servers.
• Network Operating System: Network Operating System is a type of operating
system that runs on a server and provides the capability to manage data, users,
groups, security, applications, and other networking functions.
• Real-time Operating System: Real-time Operating System is a type of operating
system that serves a real-time system and the time interval required to process and
respond to inputs is very small. These operating systems are designed to respond to
events in real time. They are used in applications that require quick and deterministic
responses, such as embedded systems, industrial control systems, and robotics.
• Multiprocessing Operating System: Multiprocessor Operating Systems use multiple
CPUs within a single computer system to boost performance. The CPUs are linked
together so that a job can be divided and executed more quickly.
• Single-User Operating Systems: Single-User Operating Systems are designed to
support a single user at a time. Examples include Microsoft Windows for personal
computers and Apple macOS.
• Multi-User Operating Systems: Multi-User Operating Systems are designed to
support multiple users simultaneously. Examples include Linux and Unix.
• Embedded Operating Systems: Embedded Operating Systems are designed to
run on devices with limited resources, such as smartphones, wearable devices, and
household appliances. Examples include Google’s Android and Apple’s iOS.
• Cluster Operating Systems: Cluster Operating Systems are designed to run on a
group of computers, or a cluster, to work together as a single system. They are used
for high-performance computing and for applications that require high availability and
reliability. Examples include Rocks Cluster Distribution and OpenMPI.
For more, refer to Types of Operating Systems.
How to Choose an Operating System?
There are several factors to be considered while choosing the best Operating System
for our use. These factors are mentioned below.

• Price Factor: Price is one of the factors in choosing the correct Operating System:
some operating systems are free, like Linux, while others are paid, like Windows
and macOS.
• Accessibility Factor: Some Operating Systems are easy to use, like macOS and
iOS, while some are a little more complex to understand, like Linux. So, you should
choose the Operating System that you find most accessible.
• Compatibility Factor: Some Operating Systems support fewer applications,
whereas others support more. You should choose the OS that supports the
applications you require.
• Security Factor: Security is also a factor in choosing the correct OS; for example,
macOS provides some additional security, while Windows has somewhat fewer
security features.
Examples of Operating Systems
• Windows (GUI-based, PC)
• GNU/Linux (Personal, Workstations, ISP, File, and print server, Three-tier
client/Server)
• macOS (Macintosh), used for Apple’s personal computers and workstations
(MacBook, iMac).
• Android (Google’s Operating System for smartphones/tablets/smartwatches)
• iOS (Apple’s OS for iPhone, iPad, and iPod Touch)

Process Scheduling :
In computing, a process is the instance of a computer program that is being executed by
one or many threads. Scheduling is important in many different computer environments.
One of the most important areas of scheduling is deciding which program will run on the CPU.
This task is handled by the Operating System (OS) of the computer, and there are many
different scheduling policies we can choose to apply.
What is Process Scheduling?
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such
operating systems allow more than one process to be loaded into the executable memory
at a time and the loaded process shares the CPU using time multiplexing.

Process scheduler
Categories of Scheduling
Scheduling falls into one of two categories:

• Non-preemptive: In this case, a process's resources cannot be taken away before the
process has finished running. Resources are switched only when the running process
finishes and transitions to a waiting state.
• Preemptive: In this case, the OS assigns resources to a process for a
predetermined period of time. The process switches from the running state to the ready
state or from the waiting state to the ready state during resource allocation. This switching
happens because the CPU may give other processes priority and substitute the
currently active process with a higher-priority process.
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler

It brings the new process to the ‘Ready State’. It controls the Degree of Multi-programming,
i.e., the number of processes present in a ready state at any point in time. It is important
that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound
processes. I/O-bound tasks are those that spend most of their time on input and output
operations, while CPU-bound processes are those that spend most of their time on the CPU.
The job scheduler increases efficiency by maintaining a balance between the two. Long-term
schedulers operate at a high level and are typically used in batch-processing systems.

2. Short-Term or CPU Scheduler

It is responsible for selecting one process from the ready state and scheduling it into the
running state. Note: the short-term scheduler only selects the process; it does not itself
load the process into the running state. This is where all the scheduling algorithms are used.
The CPU scheduler is responsible for ensuring there is no starvation due to processes with
high burst times.

Short Term Scheduler


The dispatcher is responsible for loading the process selected by the short-term
scheduler onto the CPU (Ready to Running state). Context switching is done by the
dispatcher only. A dispatcher does the following:

• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
3. Medium-Term Scheduler

It is responsible for suspending and resuming processes. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements has
overcommitted available memory, requiring memory to be freed up. It helps in
maintaining a balance between I/O-bound and CPU-bound processes. It reduces the
degree of multiprogramming.

Medium Term Scheduler


Some Other Schedulers
• I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use various
algorithms to determine the order in which I/O operations are executed, such as
FCFS (First-Come, First-Served) or RR (Round Robin).
• Real-time schedulers: In real-time systems, real-time schedulers ensure that
critical tasks are completed within a specified time frame. They can prioritize and
schedule tasks using various algorithms such as EDF (Earliest Deadline First) or RM
(Rate Monotonic).
Comparison Among Schedulers
• Role: The long-term scheduler is a job scheduler, the short-term scheduler is a CPU
scheduler, and the medium-term scheduler is a process-swapping scheduler.
• Speed: The long-term scheduler is generally slower than the short-term scheduler, the
short-term scheduler is the fastest of the three, and the medium-term scheduler lies in
between the two.
• Degree of multiprogramming: The long-term scheduler controls the degree of
multiprogramming, the short-term scheduler gives less control over how much
multiprogramming is done, and the medium-term scheduler reduces the degree of
multiprogramming.
• Time sharing: The long-term scheduler is barely present or nonexistent in a time-sharing
system, the short-term scheduler is minimal in a time-sharing system, and the medium-term
scheduler is a component of time-sharing systems.
• Function: The long-term scheduler brings new processes into the ready state, the
short-term scheduler selects those processes which are ready to execute, and the
medium-term scheduler can re-introduce a process into memory so that its execution
can be continued.

Two-State Process Model

The terms “running” and “non-running” are used to describe the two states of the
two-state process model.
1. Running: A newly created process joins the system in the running state when it is
created.
2. Not running: Processes that are not currently running are kept in a queue, awaiting
execution. Each entry in the queue is a pointer to a specific process. The queue is
implemented using a linked list. The dispatcher works as follows: when a running
process is stopped, it is moved to the back of the waiting queue; if it has completed
or failed, it is discarded. In either case, the dispatcher then chooses a process to run
from the queue.
Context Switching
In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU in
the Process Control block. A context switcher makes it possible for multiple processes to
share a single CPU using this method. A multitasking operating system must include
context switching among its features.
The state of the currently running process is saved into its process control block when
the scheduler switches the CPU from executing one process to another. The state used
to set up the registers, program counter, etc. for the process that will run next is then
loaded from that process's own PCB. After that, the second process can start executing.

The context saved and restored in the Process Control Block typically includes:

• Program Counter
• Scheduling information
• The base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
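A minimal sketch, assuming a simplified teaching model rather than any real kernel's layout, of how the saved context listed above might be grouped in C:

#include <stdint.h>

/* Illustrative process states; real systems define more. */
enum proc_state { READY, RUNNING, WAITING, TERMINATED };

/* A simplified Process Control Block: the fields that are saved on a
   context switch and restored when the process is scheduled again. */
struct pcb {
    int             pid;                 /* process identifier             */
    enum proc_state state;               /* changed state                  */
    uint64_t        program_counter;     /* where execution resumes        */
    uint64_t        registers[16];       /* currently used registers       */
    uint64_t        base_reg, limit_reg; /* base and limit register values */
    int             priority;            /* scheduling information         */
    int             open_files[16];      /* I/O state information          */
    uint64_t        cpu_time_used;       /* accounting information         */
};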

Inter Process Communication (IPC)

In general, Inter Process Communication is a type of mechanism usually provided by the
operating system (or OS). The main aim or goal of this mechanism is to provide
communication between several processes. In short, inter-process communication allows
one process to let another process know that some event has occurred.

Let us now look at the general definition of inter-process communication, which formalises
what we have discussed above.
Definition

"Inter-process communication is used for exchanging useful information between
numerous threads in one or more processes (or programs)."

Role of Synchronization in Inter Process Communication

It is one of the essential parts of inter process communication. Typically, this is provided
by interprocess communication control mechanisms, but sometimes it can also be
controlled by communication processes.

The following methods are used to provide synchronization:

1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock

Mutual Exclusion:-

It is generally required that only one process or thread can enter the critical section at a time.
This helps in synchronization and creates a stable state that avoids race conditions.
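A minimal sketch of mutual exclusion using POSIX threads (the counter and loop count are illustrative; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                /* shared resource */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* only one thread may be here at a time */
        counter++;                      /* critical section */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* 200000, because the lock prevents a race */
    return 0;
}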

Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared resources
by several processes. Semaphore is further divided into two types which are as follows:

1. Binary Semaphore
2. Counting Semaphore
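A minimal sketch of a counting semaphore using the POSIX semaphore API (assumed here, e.g. on Linux; the initial value 3 and the number of threads are arbitrary). A binary semaphore is the same idea with an initial value of 1:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t slots;                      /* counting semaphore guarding 3 "slots" */

static void *user(void *arg) {
    (void)arg;
    sem_wait(&slots);                    /* acquire: blocks while the count is 0 */
    puts("using one of the 3 slots");
    sem_post(&slots);                    /* release: lets a waiting thread proceed */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);              /* 0 = shared between threads, initial value 3 */
    for (int i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, user, NULL);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}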

Barrier:-

A barrier typically does not allow an individual process to proceed until all the processes
reach it. It is used by many parallel languages, and collective routines impose
barriers.

Spinlock:-

A spinlock is a type of lock, as its name implies. A process trying to acquire a
spinlock waits in a loop while repeatedly checking whether the lock is available. This is
known as busy waiting because, even though the process is active, it does not perform
any useful work (or task).
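A minimal busy-waiting spinlock sketch using C11 atomics (illustrative only; production spinlocks add back-off and fairness):

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* Busy wait: keep testing until the flag was previously clear. */
    while (atomic_flag_test_and_set(&lock))
        ;                                /* the waiting process does no useful work here */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);            /* release so another spinner can acquire it */
}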

Approaches to Interprocess Communication

We will now discuss some different approaches to inter-process communication, which
are as follows:

1. Pipes
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect Communication
6. Message Passing
7. FIFO

To understand them in more detail, we will discuss each of them individually.

Pipe:-

A pipe is a type of data channel that is unidirectional in nature. It means that the data
in this type of channel can be moved in only a single direction at a time. Still, two
channels of this type can be used so that data can be sent and received between two
processes. Typically, a pipe uses the standard methods for input and output. Pipes are
used in all types of POSIX systems and in different versions of the Windows operating
system as well.
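A minimal sketch of a unidirectional POSIX pipe between a parent and a child process (the message "hello" is arbitrary):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                           /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {                   /* child: reads from the pipe */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }
    close(fd[0]);                        /* parent: writes into the pipe */
    write(fd[1], "hello", 5);
    close(fd[1]);
    wait(NULL);
    return 0;
}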

Shared Memory:-

Shared memory can be referred to as a type of memory that can be used or accessed by
multiple processes simultaneously. It is primarily used so that the processes can
communicate with each other. Shared memory is supported by almost all POSIX and
Windows operating systems.
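A minimal POSIX shared-memory sketch (the object name "/demo_shm" and its size are illustrative; on older Linux systems link with -lrt). One process would map and write the region, another would map the same name and read it; here both steps are shown in one program for brevity:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Create (or open) a named shared-memory object and size it. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) return 1;
    ftruncate(fd, 4096);

    /* Map it into this process's address space. */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) return 1;

    strcpy(region, "visible to any process that maps /demo_shm");
    printf("%s\n", region);

    munmap(region, 4096);
    close(fd);
    shm_unlink("/demo_shm");             /* remove the object when done */
    return 0;
}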
Message Queue:-

In general, several different processes are allowed to read and write data to the
message queue. In the message queue, the messages are stored, or stay in the queue,
until their recipients retrieve them. In short, we can also say that the message queue
is very helpful in inter-process communication and is used by all operating systems.

Message Passing:-

Message passing is a type of mechanism that allows processes to synchronize and
communicate with each other. By using message passing, the processes can communicate
with each other without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations:

• send (message)
• receive (message)

Note: The size of the message can be fixed or variable.

Direct Communication:-

In this type of communication, a link is usually created or established between
two communicating processes. However, in every pair of communicating processes, only
one link can exist.

Indirect Communication

Indirect communication can only exist or be established when processes share a common
mailbox, and each pair of these processes shares multiple communication links. These
shared links can be unidirectional or bi-directional.

FIFO:-

A FIFO (named pipe) is a type of general communication channel between two unrelated
processes. It can also be considered full-duplex, which means that one process can
communicate with another process and vice versa.

Some other different approaches

• Socket:-
It acts as a type of endpoint for sending or receiving data in a network. Sockets work for
data sent between processes on the same computer or data sent between different
computers on the same network. Hence, they are used by several types of operating systems.

• File:-

A file is a type of data record or a document stored on the disk and can be acquired on
demand by the file server. Another most important thing is that several processes can
access that file as required or needed.

• Signal:-

As its name implies, a signal is a mechanism used in inter-process communication in a
minimal way. Typically, signals are system messages sent by one process to another.
They are usually not used for sending data but for sending commands or notifications
between processes.

Why do we need inter-process communication?

There are numerous reasons to use inter-process communication for sharing data.
Here are some of the most important ones:

• It helps to speed up modularity
• Computational speedup
• Privilege separation
• Convenience
• It helps processes communicate with each other and synchronize their
actions as well.

Note: IPC cannot be considered a solution to all problems but what is important is that it
does its job very well.
Advantages of IPC:

1. Enables processes to communicate with each other and share resources, leading to
increased efficiency and flexibility.
2. Facilitates coordination between multiple processes, leading to better overall system
performance.
3. Allows for the creation of distributed systems that can span multiple computers or
networks.
4. Can be used to implement various synchronization and communication protocols,
such as semaphores, pipes, and sockets.

Disadvantages of IPC:

1. Increases system complexity, making it harder to design, implement, and debug.
2. Can introduce security vulnerabilities, as processes may be able to access or modify
data belonging to other processes.
3. Requires careful management of system resources, such as memory and CPU time,
to ensure that IPC operations do not degrade overall system performance.
4. Can lead to data inconsistencies if multiple processes try to access or modify the
same data at the same time.
Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary
mechanism for modern operating systems and enables processes to work together
and share resources in a flexible and efficient manner. However, care must be taken
to design and implement IPC systems carefully, in order to avoid potential security
vulnerabilities and performance issues.

Deadlock Detection And Recovery

Deadlock detection and recovery is the process of detecting and resolving deadlocks in
an operating system. A deadlock occurs when two or more processes are blocked,
waiting for each other to release the resources they need. This can lead to a system-
wide stall, where no process can make progress.

There are two main approaches to deadlock detection and recovery:

1. Prevention: The operating system takes steps to prevent deadlocks from occurring
by ensuring that the system is always in a safe state, where deadlocks cannot occur.
This is achieved through resource allocation algorithms such as the Banker’s
Algorithm.
2. Detection and Recovery: If deadlocks do occur, the operating system must detect
and resolve them. Deadlock detection algorithms, such as the Wait-For Graph, are
used to identify deadlocks, and recovery algorithms, such as the Rollback and Abort
algorithm, are used to resolve them. The recovery algorithm releases the resources
held by one or more processes, allowing the system to continue to make progress.
Difference Between Prevention and Detection/Recovery: Prevention aims to avoid
deadlocks altogether by carefully managing resource allocation, while detection and
recovery aim to identify and resolve deadlocks that have already occurred.
Deadlock detection and recovery is an important aspect of operating system design and
management, as it affects the stability and performance of the system. The choice of
deadlock detection and recovery approach depends on the specific requirements of the
system and the trade-offs between performance, complexity, and risk tolerance. The
operating system must balance these factors to ensure that deadlocks are effectively
detected and resolved.
Deadlock prevention and avoidance are covered in the Handling Deadlocks section below;
here, the deadlock detection and recovery technique is discussed.
Deadlock Detection:
1. If resources have a single instance –
In this case, for deadlock detection we can run an algorithm to check for a cycle in the
Resource Allocation Graph. The presence of a cycle in the graph is a sufficient
condition for deadlock.
For example, if resources R1 and R2 each have a single instance and the graph contains
the cycle R1 → P1 → R2 → P2 → R1, then deadlock is confirmed.

2. If there are multiple instances of resources –
In this case, detection of a cycle is a necessary but not a sufficient condition for
deadlock; the system may or may not be in deadlock, depending on the situation.
3. Wait-For Graph Algorithm –
The Wait-For Graph Algorithm is a deadlock detection algorithm used to detect
deadlocks in a system where resources can have multiple instances. The algorithm
works by constructing a Wait-For Graph, which is a directed graph that represents the
dependencies between processes and resources.
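A minimal sketch of deadlock detection on a wait-for graph: each node is a process, an edge Pi → Pj means Pi is waiting for a resource held by Pj, and a depth-first search looks for a cycle. The graph size and edges below are an invented example, not from the text:

#include <stdbool.h>
#include <stdio.h>

#define N 4                               /* number of processes (example) */

/* wait_for[i][j] = true means Pi is waiting for Pj. */
static bool wait_for[N][N];
static int color[N];                      /* 0 = unvisited, 1 = on the DFS stack, 2 = done */

static bool dfs(int u) {
    color[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (color[v] == 1) return true;   /* back edge: cycle => deadlock */
        if (color[v] == 0 && dfs(v)) return true;
    }
    color[u] = 2;
    return false;
}

int main(void) {
    /* Example: P0 -> P1 -> P2 -> P0 form a cycle, P3 is independent. */
    wait_for[0][1] = wait_for[1][2] = wait_for[2][0] = true;

    bool deadlock = false;
    for (int i = 0; i < N && !deadlock; i++)
        if (color[i] == 0) deadlock = dfs(i);

    printf(deadlock ? "deadlock detected\n" : "no deadlock\n");
    return 0;
}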
Deadlock Recovery:
A traditional operating system such as Windows doesn't deal with deadlock recovery, as
it is a time- and space-consuming process. Real-time operating systems use deadlock
recovery.
1. Killing the process –
Kill all the processes involved in the deadlock, or kill the processes one by one: after
killing each process, check for deadlock again and keep repeating until the system
recovers from deadlock. Killing the processes one by one helps the system break the
circular-wait condition.
2. Resource Preemption –
Resources are preempted from the processes involved in the deadlock, and the
preempted resources are allocated to other processes so that there is a possibility of
recovering the system from the deadlock. In this case, the preempted processes may
go into starvation.
3. Concurrency Control – Concurrency control mechanisms are used to prevent data
inconsistencies in systems with multiple concurrent processes. These mechanisms
ensure that concurrent processes do not access the same data at the same time,
which can lead to inconsistencies and errors. Deadlocks can occur in concurrent
systems when two or more processes are blocked, waiting for each other to release
the resources they need. This can result in a system-wide stall, where no process
can make progress. Concurrency control mechanisms can help prevent deadlocks
by managing access to shared resources and ensuring that concurrent processes do
not interfere with each other.

Advantages and Disadvantages

Advantages of Deadlock Detection and Recovery in Operating Systems:

1. Improved System Stability: Deadlocks can cause system-wide stalls, and
detecting and resolving deadlocks can help to improve the stability of the system.
2. Better Resource Utilization: By detecting and resolving deadlocks, the operating
system can ensure that resources are efficiently utilized and that the system remains
responsive to user requests.
3. Better System Design: Deadlock detection and recovery algorithms can provide
insight into the behavior of the system and the relationships between processes and
resources, helping to inform and improve the design of the system.

Disadvantages of Deadlock Detection and Recovery in Operating Systems:

1. Performance Overhead: Deadlock detection and recovery algorithms can introduce
a significant overhead in terms of performance, as the system must regularly check
for deadlocks and take appropriate action to resolve them.
2. Complexity: Deadlock detection and recovery algorithms can be complex to
implement, especially if they use advanced techniques such as the Resource
Allocation Graph or Timestamping.
3. False Positives and Negatives: Deadlock detection algorithms are not perfect and
may produce false positives or negatives, indicating the presence of deadlocks when
they do not exist or failing to detect deadlocks that do exist.
4. Risk of Data Loss: In some cases, recovery algorithms may require rolling back the
state of one or more processes, leading to data loss or corruption.
Overall, the choice of deadlock detection and recovery approach depends on the
specific requirements of the system, the trade-offs between performance, complexity,
and accuracy, and the risk tolerance of the system. The operating system must balance
these factors to ensure that deadlocks are effectively detected and resolved.
Handling Deadlocks

Deadlock is a situation where a process or a set of processes is blocked, waiting for
some resource that is held by some other waiting process. It is an undesirable state
of the system. The following are the four conditions that must hold simultaneously for a
deadlock to occur.
1. Mutual Exclusion – A resource can be used by only one process at a time. If another
process requests that resource, the requesting process must be delayed until
the resource has been released.
2. Hold and wait – Some processes must be holding some resources in the non-
shareable mode and at the same time must be waiting to acquire some more
resources, which are currently held by other processes in the non-shareable mode.
3. No pre-emption – Resources granted to a process can be released back to the
system only as a result of voluntary action of that process after the process has
completed its task.
4. Circular wait – Deadlocked processes are involved in a circular chain such that each
process holds one or more resources being requested by the next process in the chain.
Methods of handling deadlocks: There are four approaches to dealing with deadlocks.
1. Deadlock Prevention
2. Deadlock avoidance (Banker's Algorithm)
3. Deadlock detection & recovery
4. Deadlock Ignorance (Ostrich Method)
These are explained below.
1. Deadlock Prevention: The strategy of deadlock prevention is to design the system
in such a way that the possibility of deadlock is excluded. The indirect methods prevent
the occurrence of one of the three necessary conditions of deadlock, i.e., mutual exclusion,
no pre-emption, and hold and wait. The direct method prevents the occurrence of
circular wait. Prevention techniques for each condition are outlined below. Mutual
exclusion is supported by the OS and generally cannot be eliminated for non-shareable
resources.
Hold and Wait – this condition can be prevented by requiring that a process request all
of its required resources at one time, blocking the process until all of its requests can
be granted simultaneously. But this prevention does not yield good results because:
• a long waiting time is required
• allocated resources are used inefficiently
• a process may not know all of its required resources in advance
No pre-emption – techniques for no pre-emption are:

• If a process that is holding some resource, requests another resource that can not
be immediately allocated to it, all resources currently being held are released and if
necessary, request again together with the additional resource.
• If a process requests a resource that is currently held by another process, the OS
may pre-empt the second process and require it to release its resources. This works
only if both processes do not have the same priority.
Circular wait – One way to ensure that this condition never holds is to impose a total
ordering of all resource types and to require that each process requests resources in
increasing order of enumeration, i.e., if a process has been allocated resources of type
R, then it may subsequently request only resources of types that follow R in the
ordering.
2. Deadlock Avoidance: A deadlock avoidance algorithm works by proactively
looking for potential deadlock situations before they occur. It does this by tracking the
resource usage of each process and identifying conflicts that could potentially lead to a
deadlock. If a potential deadlock is identified, the algorithm takes steps to resolve the
conflict, such as rolling back one of the processes or pre-emptively allocating resources
to other processes. The deadlock avoidance algorithm is designed to minimize the
chances of a deadlock occurring, although it cannot guarantee that a deadlock will
never occur. This approach allows the three necessary conditions of deadlock but
makes judicious choices to ensure that the deadlock point is never reached; it allows
more concurrency than prevention. A decision is made dynamically whether the
current resource allocation request will, if granted, potentially lead to deadlock. It
requires knowledge of future process requests. Two techniques to avoid deadlock:
1. Process initiation denial
2. Resource allocation denial
Advantages of deadlock avoidance techniques:

• It is not necessary to pre-empt and roll back processes
• It is less restrictive than deadlock prevention
Disadvantages:

• Future resource requirements must be known in advance
• Processes can be blocked for long periods
• There must exist a fixed number of resources for allocation
Banker’s Algorithm:
The Banker’s Algorithm is based on the concept of resource allocation graphs. A
resource allocation graph is a directed graph where each node represents a process,
and each edge represents a resource. The state of the system is represented by the
current allocation of resources between processes. For example, if the system has
three processes, each of which is using two resources, the resource allocation graph
would look like this:
Processes A, B, and C would be the nodes, and the resources they are using would be
the edges connecting them. The Banker’s Algorithm works by analyzing the state of the
system and determining if it is in a safe state or at risk of entering a deadlock.
To determine if a system is in a safe state, the Banker’s Algorithm uses two matrices:
the available matrix and the need matrix. The available matrix contains the amount of
each resource currently available. The need matrix contains the amount of each
resource required by each process.
The Banker’s Algorithm then checks to see if a process can be completed without
overloading the system. It does this by subtracting the amount of each resource used by
the process from the available matrix and adding it to the need matrix. If the result is in
a safe state, the process is allowed to proceed, otherwise, it is blocked until more
resources become available.
The Banker’s Algorithm is an effective way to prevent deadlocks in multiprogramming
systems. It is used in many operating systems, including Windows and Linux. In
addition, it is used in many other types of systems, such as manufacturing systems and
banking systems.
The Banker’s Algorithm is a powerful tool for resource allocation problems, but it is not
foolproof. It can be fooled by processes that consume more resources than they need,
or by processes that produce more resources than they need. Also, it can be fooled by
processes that consume resources in an unpredictable manner. To prevent these types
of problems, it is important to carefully monitor the system to ensure that it is in a safe
state.
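A minimal sketch of the safety check at the heart of the Banker's Algorithm, using the Available, Allocation, and Need matrices described above; the sizes and numbers are an invented example, not from the text:

#include <stdbool.h>
#include <stdio.h>

#define P 3                    /* number of processes (example)      */
#define R 3                    /* number of resource types (example) */

int available[R]     = {3, 3, 2};
int allocation[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
int need[P][R]       = {{5, 3, 3}, {1, 2, 2}, {3, 0, 0}};

/* Returns true if the system is in a safe state. */
bool is_safe(void) {
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {
                /* Pretend p runs to completion and returns its resources. */
                for (int r = 0; r < R; r++) work[r] += allocation[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;   /* no process can proceed: unsafe */
    }
    return true;
}

int main(void) {
    printf(is_safe() ? "safe state\n" : "unsafe state\n");
    return 0;
}

In this sketch, a resource request would be granted only if, after pretending to allocate it, the safety check above still succeeds.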
3. Deadlock Detection: Deadlock detection is used by employing an algorithm that
tracks the circular waiting and kills one or more processes so that the deadlock is
removed. The system state is examined periodically to determine if a set of processes is
deadlocked. A deadlock is resolved by aborting and restarting a process, relinquishing
all the resources that the process held.

• This technique does not limit resource access or restrict process action.
• Requested resources are granted to processes whenever possible.
• It never delays the process initiation and facilitates online handling.
• The disadvantage is the inherent pre-emption losses.
4. Deadlock Ignorance: In the Deadlock ignorance method the OS acts like the
deadlock never occurs and completely ignores it even if the deadlock occurs. This
method only applies if the deadlock occurs very rarely. The algorithm is very simple. It
says: “if the deadlock occurs, simply reboot the system and act like the deadlock never
occurred.” That’s why the algorithm is called the Ostrich Algorithm.
Advantages:

• Ostrich Algorithm is relatively easy to implement and is effective in most cases.


• It helps in avoiding the deadlock situation by ignoring the presence of deadlocks.
Disadvantages:

• Ostrich Algorithm does not provide any information about the deadlock situation.
• It can lead to reduced performance of the system as the system may be blocked for
a long time.
• It can lead to a resource leak, as resources are not released when the system is
blocked due to deadlock.

Memory Management in Operating System

The term memory can be defined as a collection of data in a specific format. It is used to
store instructions and process data. The memory comprises a large array or group of
words or bytes, each with its own location. The primary purpose of a computer system is
to execute programs. These programs, along with the information they access, should be
in the main memory during execution. The CPU fetches instructions from memory
according to the value of the program counter.
To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important. Many memory management methods exist, reflecting various
approaches, and the effectiveness of each algorithm depends on the situation.
Here, we will cover the following memory management topics:

• What is Main Memory?


• What is Memory Management?
• Why Memory Management is Required?
• Logical Address Space and Physical Address Space
• Static and Dynamic Loading
• Static and Dynamic Linking
• Swapping
• Contiguous Memory Allocation
o Memory Allocation
▪ First Fit
▪ Best Fit
▪ Worst Fit
o Fragmentation
▪ Internal Fragmentation
▪ External Fragmentation
o Paging
Before we start memory management, let us see what main memory is.
What is Main Memory?
The main memory is central to the operation of a Modern Computer. Main Memory is a
large array of words or bytes, ranging in size from hundreds of thousands to billions. Main
memory is a repository of rapidly available information shared by the CPU and I/O devices.
Main memory is the place where programs and information are kept when the processor
is effectively utilizing them. Main memory is associated with the processor, so moving
instructions and information into and out of the processor is extremely fast. Main memory
is also known as RAM (Random Access Memory). This memory is volatile. RAM loses its
data when a power interruption occurs.

Main Memory
What is Memory Management?
In a multiprogramming computer, the Operating System resides in a part of memory, and
the rest is used by multiple processes. The task of subdividing the memory among
different processes is called Memory Management. Memory management is a method in
the operating system to manage operations between main memory and disk during
process execution. The main aim of memory management is to achieve efficient utilization
of memory.
Why is Memory Management Required?
• To allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity during process execution.
Now we are discussing the concept of Logical Address Space and Physical Address
Space
Logical and Physical Address Space
• Logical Address Space: An address generated by the CPU is known as a “Logical
Address”. It is also known as a Virtual address. Logical address space can be
defined as the size of the process. A logical address can be changed.
• Physical Address Space: An address seen by the memory unit (i.e the one loaded
into the memory address register of the memory) is commonly known as a “Physical
Address”. A Physical address is also known as a Real address. The set of all
physical addresses corresponding to these logical addresses is known as Physical
address space. A physical address is computed by MMU. The run-time mapping
from virtual to physical addresses is done by a hardware device Memory
Management Unit(MMU). The physical address always remains constant.
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different
types of loading :

• Static Loading: Static Loading is basically loading the entire program into a fixed
address. It requires more memory space.
• Dynamic Loading: Without dynamic loading, the entire program and all data of a
process must be in physical memory for the process to execute, so the size of a
process is limited to the size of physical memory. To gain proper memory utilization,
dynamic loading is used. In dynamic loading, a routine is not loaded until it is called.
All routines reside on disk in a relocatable load format. One of the advantages of
dynamic loading is that a routine that is never used is never loaded. This is useful
when a large amount of code is needed only to handle infrequently occurring cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more
object files generated by a compiler and combines them into a single executable file.
• Static Linking: In static linking, the linker combines all necessary program modules
into a single executable program. So there is no runtime dependency. Some
operating systems support only static linking, in which system language libraries are
treated like any other object module.
• Dynamic Linking: The basic concept of dynamic linking is similar to dynamic
loading. In dynamic linking, “Stub” is included for each appropriate library routine
reference. A stub is a small piece of code. When the stub is executed, it checks
whether the needed routine is already in memory or not. If not available then the
program loads the routine into memory.
Swapping
When a process is executed, it must reside in main memory. Swapping is the process of
temporarily moving a process from main memory to secondary memory, which is slow
compared to main memory, and later bringing it back. Swapping allows more processes
to be run than can fit into memory at one time. The main cost of swapping is transfer
time, and the total transfer time is directly proportional to the amount of memory
swapped. Swapping is also known as roll-out, roll-in, because if a higher-priority process
arrives and wants service, the memory manager can swap out a lower-priority process
and then load and execute the higher-priority process. After the higher-priority work
finishes, the lower-priority process is swapped back into memory and its execution
continues.

Swapping in memory management
Memory Management with Monoprogramming (Without Swapping)

This is the simplest memory management approach: the memory is divided into two
sections:

• One part for the operating system
• The other part for the user program
A fence register marks the boundary between the two sections:

operating system | user program

• In this approach, the operating system keeps track of the first and last locations
available for the allocation of the user program
• The operating system is loaded either at the bottom or at the top
• Interrupt vectors are often loaded in low memory; therefore, it makes sense to load
the operating system in low memory
• Sharing of data and code does not make much sense in a single-process
environment
• The operating system can be protected from user programs with the help of the
fence register.

Advantages of this approach

• It is a simple management approach

Disadvantages of this approach

• It does not support multiprogramming
• Memory is wasted

Multiprogramming with Fixed Partitions (Without Swapping)

• A memory partition scheme with a fixed number of partitions was introduced to
support multiprogramming. This scheme is based on contiguous allocation
• Each partition is a block of contiguous memory
• Memory is partitioned into a fixed number of partitions
• Each partition is of fixed size
Example: As shown in the figure, memory is partitioned into 5 regions; one region is
reserved for the operating system and the remaining four partitions are for user programs.
Fixed Size Partitioning (example layout): Operating System | p1 | p2 | p3 | p4

Partition Table
Once partitions are defined, the operating system keeps track of the status of memory
partitions through a data structure called a partition table.
Sample Partition Table

Starting Address of Partition    Size of Partition    Status
0k                               200k                 allocated
200k                             100k                 free
300k                             150k                 free
450k                             250k                 allocated

Logical vs Physical Address

An address generated by the CPU is commonly referred to as a logical address; the
address seen by the memory unit is known as the physical address. A logical address
can be mapped to a physical address by hardware with the help of a base register; this
is known as dynamic relocation of memory references.
Contiguous Memory Allocation
The main memory should accommodate both the operating system and the different client
processes. Therefore, the allocation of memory becomes an important task in the
operating system. The memory is usually divided into two partitions: one for the resident
operating system and one for the user processes. We normally need several user
processes to reside in memory simultaneously. Therefore, we need to consider how to
allocate available memory to the processes that are in the input queue waiting to be
brought into memory. In contiguous memory allocation, each process is contained in a
single contiguous segment of memory.

Contiguous Memory Allocation
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner.
One of the simplest methods for allocating memory is to divide memory into several fixed-
sized partitions and each partition contains exactly one process. Thus, the degree of
multiprogramming is obtained by the number of partitions.

• Multiple partition allocation: In this method, a process is selected from the input
queue and loaded into the free partition. When the process terminates, the partition
becomes available for other processes.
• Fixed partition allocation: In this method, the operating system maintains a table
that indicates which parts of memory are available and which are occupied by
processes. Initially, all memory is available for user processes and is considered one
large block of available memory. This available memory is known as a “Hole”. When
a process arrives and needs memory, we search for a hole that is large enough to
store this process. If the requirement is fulfilled, we allocate memory to the process;
otherwise, we keep the rest available to satisfy future requests. While allocating
memory, dynamic storage allocation problems sometimes occur, which concern
how to satisfy a request of size n from a list of free holes. There are some solutions
to this problem:

First Fit

In First Fit, the first available free hole that fulfils the requirement of the process is allocated.

First Fit
Here, in this diagram, a 40 KB memory block is the first available free hole that can store
process A (size of 25 KB), because the first two blocks did not have sufficient memory
space.

Best Fit

In Best Fit, we allocate the smallest hole that is big enough for the process's requirements.
For this, we search the entire list, unless the list is ordered by size.
Best Fit
Here in this example, we first traverse the complete list and find that the last hole, of 25 KB,
is the best suitable hole for Process A (size 25 KB). In this method, memory utilization is
maximum as compared to other memory allocation techniques.

Worst Fit

In Worst Fit, we allocate the largest available hole to the process. This method produces the
largest leftover hole.

Worst Fit
Here in this example, Process A (Size 25 KB) is allocated to the largest available memory
block which is 60KB. Inefficient memory utilization is a major issue in the worst fit.
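A minimal sketch comparing the three placement strategies on a list of free holes; the hole sizes and the 25 KB request roughly mirror the examples above and are otherwise arbitrary:

#include <stdio.h>

#define NH 5
static int hole[NH] = {10, 20, 40, 60, 25};   /* free hole sizes in KB (example) */

/* Each function returns the index of the chosen hole, or -1 if none fits. */

int first_fit(int req) {
    for (int i = 0; i < NH; i++)
        if (hole[i] >= req) return i;          /* first hole big enough */
    return -1;
}

int best_fit(int req) {
    int best = -1;
    for (int i = 0; i < NH; i++)
        if (hole[i] >= req && (best == -1 || hole[i] < hole[best]))
            best = i;                          /* smallest hole that still fits */
    return best;
}

int worst_fit(int req) {
    int worst = -1;
    for (int i = 0; i < NH; i++)
        if (hole[i] >= req && (worst == -1 || hole[i] > hole[worst]))
            worst = i;                         /* largest hole that fits */
    return worst;
}

int main(void) {
    int req = 25;                              /* process A needs 25 KB; all three succeed here */
    printf("first fit -> hole of %d KB\n", hole[first_fit(req)]);
    printf("best fit  -> hole of %d KB\n", hole[best_fit(req)]);
    printf("worst fit -> hole of %d KB\n", hole[worst_fit(req)]);
    return 0;
}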
Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory after
execution, leaving behind small free holes. These holes cannot be assigned to new
processes because they are not combined or do not fulfill the memory requirement of
the process. To achieve a degree of multiprogramming, we must reduce this waste of
memory, i.e., the fragmentation problem. There are two types of fragmentation in
operating systems:
1. Internal fragmentation: Internal fragmentation occurs when a memory block
allocated to a process is larger than its requested size. Due to this, some unused
space is left over, creating an internal fragmentation problem. Example: Suppose
fixed partitioning is used for memory allocation and the available blocks are of sizes
3MB, 6MB, and 7MB. Now a new process p4 of size 2MB comes and demands a
block of memory. It gets a memory block of 3MB, but 1MB of that block is wasted,
and it cannot be allocated to other processes either. This is called internal
fragmentation.
2. External fragmentation: In external fragmentation, we have free memory blocks,
but we cannot assign them to a process because the blocks are not contiguous.
Example: Suppose (continuing the above example) three processes p1, p2, and p3
come with sizes 2MB, 4MB, and 7MB respectively. They get memory blocks of size
3MB, 6MB, and 7MB respectively. After allocation, the p1 and p2 processes leave
1MB and 2MB unused. Suppose a new process p4 comes and demands a 3MB block
of memory, which is available in total, but we cannot assign it because the free
memory space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit strategies for memory allocation are affected by external
fragmentation. To overcome the external fragmentation problem, compaction is used. In
the compaction technique, all free memory space is combined into one large block, so
this space can be used by other processes effectively.
Another possible solution to external fragmentation is to allow the logical address
space of the processes to be noncontiguous, thus permitting a process to be allocated
physical memory wherever the latter is available.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous
allocation of physical memory. This scheme permits the physical address space of a
process to be non-contiguous.

• Logical Address or Virtual Address (represented in bits): An address generated
by the CPU.
• Logical Address Space or Virtual Address Space (represented in words or
bytes): The set of all logical addresses generated by a program.
• Physical Address (represented in bits): An address actually available on a
memory unit.
• Physical Address Space (represented in words or bytes): The set of all physical
addresses corresponding to the logical addresses.
Example:
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words
(1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address =
log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words
(1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address =
log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit
(MMU) which is a hardware device and this mapping is known as the paging technique.

• The Physical Address Space is conceptually divided into several fixed-size blocks,
called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
Let us consider an example:

• Physical Address = 12 bits, then Physical Address Space = 4 K words
• Logical Address = 13 bits, then Logical Address Space = 8 K words
• Page size = frame size = 1 K words (assumption)

Paging
The address generated by the CPU is divided into:

• Page Number (p): The number of bits required to represent the pages in the Logical
Address Space, i.e., the page number.
• Page Offset (d): The number of bits required to represent a particular word in a page,
i.e., the page size of the Logical Address Space, or the word number within a page
(page offset).
Physical Address is divided into:
• Frame Number (f): The number of bits required to represent a frame of the Physical
Address Space, i.e., the frame number.
• Frame Offset (d): The number of bits required to represent a particular word in a
frame, i.e., the frame size of the Physical Address Space, or the word number within a
frame (frame offset).
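A minimal sketch of how a logical address is split into page number and offset and recombined with the frame number from the page table; the 1 K-word page size matches the example above, while the page-table contents are hypothetical:

#include <stdio.h>

#define PAGE_SIZE   1024          /* 1 K words, as in the example above */
#define OFFSET_BITS 10            /* log2(1024) */

/* Hypothetical page table: page number -> frame number. */
static int page_table[8] = {5, 2, 7, 0, 1, 3, 6, 4};

unsigned translate(unsigned logical) {
    unsigned p = logical >> OFFSET_BITS;        /* page number   */
    unsigned d = logical & (PAGE_SIZE - 1);     /* page offset   */
    unsigned f = page_table[p];                 /* frame number  */
    return (f << OFFSET_BITS) | d;              /* physical address */
}

int main(void) {
    unsigned logical = 2 * PAGE_SIZE + 37;      /* page 2, offset 37 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}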
The hardware implementation of the page table can be done using dedicated registers,
but the use of registers for the page table is satisfactory only if the page table is small.
If the page table contains a large number of entries, then we can use a TLB (Translation
Look-aside Buffer), a special, small, fast look-up hardware cache.

• The TLB is an associative, high-speed memory.
• Each entry in the TLB consists of two parts: a tag and a value.
• When this memory is used, an item is compared with all tags simultaneously. If
the item is found, then the corresponding value is returned.

Page Map Table


If the main memory access time is m and the page table is kept in main memory, then:
Effective access time = m (to access the page table) + m (to access the required word) = 2m
TLB Hit and Miss
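Plugging numbers into the formula above shows how strongly the TLB hit ratio affects the effective access time. The access times and hit ratios below are illustrative assumptions, not fixed values.

def effective_access_time(hit_ratio, tlb_time, mem_time):
    # Hit:  one TLB lookup + one memory access.
    # Miss: one TLB lookup + page-table access + memory access.
    hit = tlb_time + mem_time
    miss = tlb_time + 2 * mem_time
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Illustrative numbers: 100 ns memory access, 20 ns TLB lookup
for h in (0.80, 0.98):
    print(f"hit ratio {h:.2f}: EAT = {effective_access_time(h, 20, 100):.1f} ns")
# -> 140.0 ns at an 80% hit ratio, 122.0 ns at a 98% hit ratio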

Multiprogramming in Operating System

As the name suggests, more than one program can be active at the same time. Before
the concept of multiprogramming, there were single-tasking operating systems like MS
DOS that allowed only one program to be loaded and run at a time. These systems
were inefficient because the CPU was not used well: in a single-tasking system, if the
current program waits for some input/output to finish, the CPU sits idle. The idea
of multiprogramming is to assign the CPU to another process while the current process
is waiting. This has the advantages below.
1) Users get the feeling that they can run multiple applications on a single CPU, even
though the CPU is running only one process at a time.
2) The CPU is utilized better.
All modern operating systems like MS Windows, Linux, etc. are multiprogramming
operating systems.
Features of Multiprogramming
29. Needs only a single CPU for implementation.
30. Context switching between processes.
31. Switching happens when the current process enters a waiting state.
32. CPU idle time is reduced.
33. High resource utilization.
34. High performance.
Disadvantages of Multiprogramming
35. Prior knowledge of scheduling algorithms (an algorithm that decides which process
will get the CPU next) is required.
36. If there is a large number of jobs, then long jobs may have to wait for a long time.
37. Memory management is needed in the operating system because all tasks are
stored in main memory.
38. Using multiprogramming to a large extent can cause heat-up issues.
Scheduling algorithms are of two types (a brief sketch contrasting the two follows this list).
39. Preemptive scheduling algorithm: The CPU can be taken away from the running
process before it finishes, for example when a higher-priority process becomes ready
or when the running process's time slice expires.
40. Non-preemptive scheduling algorithm: Once a process gets the CPU, it keeps the
CPU until it terminates or voluntarily enters a waiting state; it is not interrupted in between.
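The sketch below contrasts the two approaches with a non-preemptive FCFS run and a preemptive round-robin run over the same jobs; the burst times and the 2-unit time quantum are illustrative assumptions.

from collections import deque

def fcfs(bursts):
    # Non-preemptive: each process runs to completion in arrival order.
    time, completion = 0, {}
    for pid, burst in bursts:
        time += burst
        completion[pid] = time
    return completion

def round_robin(bursts, quantum=2):
    # Preemptive: a process is interrupted when its time quantum expires.
    queue = deque(bursts)
    time, completion = 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining - run > 0:
            queue.append((pid, remaining - run))   # preempted, rejoins the ready queue
        else:
            completion[pid] = time
    return completion

jobs = [("P1", 6), ("P2", 3), ("P3", 1)]
print("FCFS       :", fcfs(jobs))         # {'P1': 6, 'P2': 9, 'P3': 10}
print("Round robin:", round_robin(jobs))  # {'P3': 5, 'P2': 8, 'P1': 10}

Notice that under the preemptive policy the short jobs finish much earlier, while the non-preemptive policy makes them wait behind the long job.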
How do Multiprogramming Operating Systems Work?
In a multiprogramming system, multiple programs are stored in memory, and each
program is given a specific portion of memory; a program in execution is known as a
process. The operating system handles all these processes and their states. Before a
process undergoes execution, the operating system selects a ready process by checking
which process should undergo execution next. When the chosen process is running on the
CPU, it may need an input/output operation; at that point the process leaves the CPU
(and may be temporarily moved out of main memory to secondary storage) for the I/O
operation, and the CPU switches to the next ready process. When the process that went
for I/O finishes its work, the CPU eventually switches back to it. This switching happens
so fast and so repeatedly that it creates an illusion of simultaneous execution.

I/O scheduling in Operating Systems


You can manage connectivity in your active I/O configurations through I/O operations,
which offer a centralized point of control. Besides allowing you to view and change the
paths between a processor and an input/output device (which may involve dynamic
switching), I/O management also actively participates in identifying unusual I/O
conditions. Before understanding I/O scheduling, it is important to get an overview of
I/O operations.

How are I/O operations performed?

The operating system has a certain portion of code that is dedicated to managing
input/output in order to improve the reliability and performance of the system. A
computer system contains CPUs and more than one device controller connected to a
common bus channel. The operating system communicates with these controllers through
device drivers, which provide an interface to I/O devices for communicating with the
system hardware, promoting ease of communication and providing access to shared memory.

I/O Requests in operating systems


I/O requests are managed by device drivers in collaboration with some system
programs inside the I/O device. The requests are served by the OS using three simple
segments:
41. I/O Traffic Controller: Keeps track of the status of all devices, control units, and
communication channels.
42. I/O Scheduler: Executes the policies used by the OS to allocate and access the device,
control units, and communication channels.
43. I/O Device Handler: Serves the device interrupts and handles the transfer of data.
I/O Scheduling in operating systems
Scheduling is used for efficient usage of computer resources, avoiding deadlock and
serving all processes waiting in the queue. To know more about CPU scheduling, refer to
CPU Scheduling in Operating Systems.
I/O Traffic Controller has 3 main tasks:

• The primary task is to check if there is at least one path available.
• If there exists more than one path, it must decide which one to select.
• If all paths are occupied, its task is to analyze which path will be available at the earliest.
Scheduling in computing is the process of allocating resources to carry out tasks.
Processors, network connections, or expansion cards are examples of the resources.
The tasks could be processes, threads, or data flows.
A process referred to as a scheduler is responsible for scheduling. Schedulers are
frequently made to keep all computer resources active (as in load balancing), efficiently
divide up system resources among multiple users, or reach a desired level of service.
The I/O scheduler functions similarly to the process scheduler: it allocates the devices,
control units, and communication channels. However, under a heavy load of I/O requests,
the scheduler must decide which request should be served first, and for that the OS
maintains multiple queues. The major difference between a process scheduler
and an I/O scheduler is that I/O requests are not preempted: once the channel
program has started, it is allowed to run to completion. This is feasible
because channel programs are relatively short (50 to 100 ms). Some modern operating
systems allow the I/O scheduler to serve higher-priority requests: in simpler words, if an
I/O request has higher priority, it is served before other I/O requests with lower priority.
The I/O scheduler works in coordination with the I/O traffic controller to keep track of
which path is being used for the current I/O request. The I/O device handler manages the
I/O interrupts (if any) and the scheduling algorithms.
A few I/O handling algorithms are :
44. FCFS [First come first serve].
45. SSTF [Shortest seek time first].
46. SCAN
47. Look
o N-Step Scan
o C-SCAN
o C-LOOK
Every scheduling algorithm aims to minimize arm movement, mean response time, and
variance in response time. An overview of all I/O scheduling algorithms is described
below :
48. First Come First Serve [FCFS]: It is one of the simplest device-scheduling
algorithms since it is easy to program and essentially fair to users (I/O devices). Its
main drawback is a potentially high overall seek time, so any algorithm that can
reduce the seek time is preferable for scheduling.
49. Shortest Seek Time First [SSTF]: It uses the same idea as Shortest Job
First in process scheduling, where the shortest processes are served first and longer
processes have to wait for their turn. Applying the SJF concept to I/O scheduling,
the request whose track is closest to the one currently being served (the one with the
shortest distance to travel on disk) is satisfied next. The main advantage over FCFS is
that it minimizes overall seek time. However, it favors easy-to-reach requests and
postpones traveling to those that are out of the way.
50. SCAN Algorithm: SCAN uses a status flag that tells the direction of the arm, i.e.,
whether the arm is moving toward the innermost track or toward the outermost track
of the disk. The algorithm moves the arm from one end of the disk toward the other,
servicing every request in its way. When it reaches the last track in that direction, it
reverses and moves back across the disk, again servicing every request in its path.
51. LOOK [Elevator Algorithm]: It is a variation of the SCAN algorithm; here the arm
does not necessarily go all the way to either end of the disk unless there are requests
pending there. It looks ahead for a pending request before continuing in a direction. A
natural question is, "Why should we use LOOK over SCAN?" The major advantage of
LOOK over SCAN is that it avoids unnecessary travel of the arm to the ends of the disk,
reducing the delay of I/O requests.

Other variations of SCAN

52. N-Step Scan: It holds all the pending requests until the arm starts its way back. New
requests are grouped for the next cycle of rotation.
53. C-SCAN [Circular SCAN]: It provides a uniform wait time as the arm serves
requests on its way during the inward cycle. To know more, refer to the Difference
between SCAN and C-SCAN.
54. C-LOOK [Optimized version of C-SCAN]: The arm does not necessarily return to the
lowest-numbered track; it returns to the lowest pending request to be served. It
optimizes C-SCAN because the arm does not move to the end of the disk if not required.
To know more, refer to the Difference between C-LOOK and C-SCAN. A short sketch
comparing the total head movement of a few of these algorithms is given below.
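The total head movement of these algorithms can be compared with a short simulation; the request queue and starting head position below are illustrative assumptions (and this C-LOOK sketch counts the jump back to the lowest pending request as head movement).

# A minimal sketch comparing total head movement for FCFS, SSTF, and C-LOOK.

def fcfs(requests, head):
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

def sstf(requests, head):
    pending, total = list(requests), 0
    while pending:
        # Serve the request with the shortest seek distance from the current head.
        track = min(pending, key=lambda t: abs(t - head))
        total += abs(track - head)
        head = track
        pending.remove(track)
    return total

def c_look(requests, head):
    # Serve requests at or above the head moving outward, then jump back to the
    # lowest pending request and continue in the same direction.
    upper = sorted(t for t in requests if t >= head)
    lower = sorted(t for t in requests if t < head)
    return fcfs(upper + lower, head)

queue = [98, 183, 37, 122, 14, 124, 65, 67]
start = 53
print("FCFS  :", fcfs(queue, start))    # 640 tracks
print("SSTF  :", sstf(queue, start))    # 236 tracks
print("C-LOOK:", c_look(queue, start))  # 322 tracks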

File management:
File management is a fundamental and crucial component of an operating system. The
operating system manages the files of the computer system and controls files with
various extensions.

File management in an operating system is formally defined as manipulating files in a
computer system, which includes creating, modifying, and deleting files. Therefore, file
management is one of the simple but crucial features offered by the operating system.
The operating system's file management function entails software that handles or
maintains the files (binary, text, PDF, docs, audio, video, etc.) present on the system.

The operating system's file system can manage single files and groups of files in a
computer system. The operating system's file management manages all of the files on the
computer system with different extensions (such as .exe, .pdf, .txt, .docx, etc.).

The features of File Management in an Operating System are:
55. Providing security to application software and the system.
56. Memory management.
57. Disk management.
58. I/O operations.
59. File management, etc.
So, file management is one of the basic but important features of the operating system.

We can also use the file system in an operating system to get details of any file present
on our system (see the sketch after this list). The details can be:
• the location of the file (the logical location where the file is stored in the computer system)
• the owner of the file (who can write to or read the particular file)
• when the file was created (creation time and modification time)
• the type of the file (the format of the file, for example, docs, pdf, text, etc.)
• the state of completion of the file, etc.
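For example, such details can be queried through the file system programmatically. The sketch below uses Python's standard os module; "example.txt" is a placeholder path, to be replaced with a real file on your system.

import os
import time

path = "example.txt"          # placeholder path
info = os.stat(path)          # ask the file system for the file's metadata

print("Size (bytes):", info.st_size)
print("Owner UID   :", info.st_uid)                # numeric owner id on POSIX systems
print("Changed     :", time.ctime(info.st_ctime))  # metadata change time on POSIX, creation time on Windows
print("Modified    :", time.ctime(info.st_mtime))
print("Extension   :", os.path.splitext(path)[1])  # file type inferred from the extension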
For the operating system to understand and manage a file, the file must be in a
predefined structure or format. There are three types of file structures present in
operating systems:

60. Text file: A text file is a non-executable file containing a sequence of symbols,
numbers, and letters organized in the form of lines.
61. Source file: A source file contains a series of procedures and functions. In simpler
terms, we can say that a source file is a file that contains the instructions of a
program as written by the programmer.
62. Object file: An object file is a file that contains object code in the form of
assembly language code or machine language code. In simpler terms, we can
say that an object file contains program instructions in the form of a series of
organized bytes arranged in blocks.
What is a Distributed Operating System?

A Distributed Operating System refers to a model in which applications run on multiple
interconnected computers, offering enhanced communication and integration capabilities
compared to a network operating system.

In a Distributed Operating System, multiple CPUs are utilized, but for end-users, it
appears as a typical centralized operating system. It enables the sharing of various
resources such as CPUs, disks, network interfaces, nodes, and computers across
different sites, thereby expanding the available data within the entire system.
Effective communication channels, such as high-speed buses and telephone lines, connect
all processors; each processor is equipped with its own local memory and communicates
with its neighboring processors over these channels. Because of these characteristics, a
distributed operating system is classified as a loosely coupled system. It encompasses
multiple computers, nodes, and sites, all interconnected through LAN/WAN lines. The
ability of a distributed OS to share processing resources and I/O files while providing
users with a virtual machine abstraction is an important feature.
Types of Distributed Operating System
There are many types of Distributed Operating Systems; some of them are as follows:

1. Client-Server Systems

In a client-server system within a distributed operating system, clients request services or
resources from servers over a network. Clients initiate communication, send requests,
and handle user interfaces, while servers listen for requests, perform tasks, and manage
resources.

• This model allows for scalable resource utilization, efficient sharing, modular
development, centralized control, and fault tolerance.
• It facilitates collaboration between distributed entities, promoting the development of
reliable, scalable, and interoperable distributed systems.
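As a concrete illustration of the client-server model described above, the sketch below runs a tiny TCP server in a background thread and has a client send it one request; the host, port, and the "served:" reply format are illustrative assumptions.

# A minimal sketch of the client-server model using TCP sockets.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5000

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()                              # server listens for client requests
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)             # receive the client's request
            conn.sendall(b"served: " + request)   # perform the task and reply

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                                   # give the server time to start listening

# The client initiates communication and sends a request.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"read file")
    print(cli.recv(1024).decode())                # -> served: read file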

2. Peer-to-Peer(P2P) Systems

In peer-to-peer (P2P) systems, interconnected nodes directly communicate and
collaborate without centralized control. Each node can act as both a client and a server,
sharing resources and services with other nodes. P2P systems enable decentralized
resource sharing, self-organization, and fault tolerance.
• They support efficient collaboration, scalability, and resilience to failures without
relying on central servers.
• This model facilitates distributed data sharing, content distribution, and computing
tasks, making it suitable for applications like file sharing, content delivery, and
blockchain networks.

3. Middleware

Middleware acts as a bridge between different software applications or components,
enabling communication and interaction across distributed systems. It abstracts the
complexities of network communication, providing services like message passing, remote
procedure calls (RPC), and object management (a minimal RPC sketch is given at the end
of this subsection).

• Middleware facilitates interoperability, scalability, and fault tolerance by decoupling application logic from the underlying infrastructure.
• It supports diverse communication protocols and data formats, enabling seamless
integration between heterogeneous systems.
• Middleware simplifies distributed system development, promotes modularity, and
enhances system flexibility, enabling efficient resource utilization and improved
system reliability.

4. Three-Tier

In a distributed operating system, the three-tier architecture divides tasks into
presentation, logic, and data layers. The presentation tier, comprising client machines or
devices, handles user interaction. The logic tier, distributed across multiple nodes or
servers, executes processing logic and coordinates system functions.

• The data tier manages storage and retrieval operations, often employing distributed
databases or file systems across multiple nodes.
• This modular approach enables scalability, fault tolerance, and efficient resource
utilization, making it ideal for distributed computing environments.

5. N-Tier

In an N-tier architecture, applications are structured into multiple tiers or layers beyond
the traditional three-tier model. Each tier performs specific functions, such as
presentation, logic, data processing, and storage, with the flexibility to add more tiers as
needed. In a distributed operating system, this architecture enables complex
applications to be divided into modular components distributed across multiple nodes or
servers.

• Each tier can scale independently, promoting efficient resource utilization, fault
tolerance, and maintainability.
• N-tier architectures facilitate distributed computing by allowing components to run on
separate nodes or servers, improving performance and scalability.
• This approach is commonly used in large-scale enterprise systems, web
applications, and distributed systems requiring high availability and scalability.
