OS Unit 3 Notes
Critical Section Problem
• The critical section problem is the problem of designing a protocol followed by a group of processes, so that when one
process has entered its critical section, no other process is allowed to execute in its critical section.
• When two processes access and manipulate the shared resource concurrently, and the resulting execution
outcome depends on the order in which processes access the resource; this is called a race condition.
• Race conditions lead to inconsistent states of data.
• Therefore, we need a synchronization protocol that allows processes to cooperate while manipulating shared
resources, which essentially is the critical section problem.
• Critical Section Problem → Semaphore
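A minimal sketch of a race condition, assuming a POSIX system with pthreads (the names counter and worker are illustrative): two threads increment a shared counter with no synchronization, so the final value is usually less than the expected 2000000, because counter++ is a non-atomic read-modify-write.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared resource */

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* unprotected critical section */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

Protecting the increment with an entry/exit protocol, as discussed in the next section, eliminates the race.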
Solutions to the critical section problem
• Any solution to the critical section problem must satisfy the following requirements:
• Mutual exclusion: When one process is executing in its critical section, no other process is allowed to
execute in its critical section.
• Progress: When no process is executing in its critical section and some process wishes to enter its
critical section, the selection of the process that enters next cannot be postponed indefinitely.
• Bounded waiting: There must be a bound on the number of times other processes are allowed to enter their
critical sections after a process has requested to enter its critical section and before that request is
granted.
• The critical section contains shared variables or resources which need to be synchronized to maintain
the consistency of data variables.
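The structure these requirements describe has an entry section, the critical section itself, and an exit section. A minimal sketch, assuming pthreads (the semaphore version appears later in these notes): the mutex guarantees mutual exclusion, while progress and bounded waiting depend on the fairness of the lock implementation.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_data = 0;

void update_shared(void)
{
    pthread_mutex_lock(&lock);       /* entry section */
    shared_data++;                   /* critical section */
    pthread_mutex_unlock(&lock);     /* exit section */
    /* remainder section */
}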
Process synchronization is the task of coordinating the execution of processes so that no two processes can
access the same shared data and resources at the same time.
• It is especially needed in a multi-process system when multiple processes are running together, and more than
one process tries to gain access to the same shared resource or data at the same time.
• This can lead to inconsistency of shared data.
• A change made by one process is not necessarily reflected when other processes access the same shared
data.
• To avoid this kind of inconsistency of data, the processes need to be synchronized with each other.
Mutual Exclusion
• The shared resources are acquired and used in a mutually exclusive manner, that is, by at most one process
at a time.
• A deadlock is a situation where a group of processes is permanently blocked because each process
has acquired a subset of the resources it needs for completion and is waiting for the release of the
remaining resources held by others in the same group,
• thus making it impossible for any of the processes to proceed.
• Deadlock can occur in a concurrent environment as a result of the uncontrolled granting of system resources
to requesting processes.
Deadlock
• Consider a narrow bridge on which traffic can flow in only one direction at a time.
• Each section of a bridge can be viewed as a resource.
• If a deadlock occurs, it can be resolved if one car backs up (preempt resources and rollback).
• Several cars may have to be backed up if a deadlock occurs.
• Starvation is possible.
• Starvation is the problem that occurs when high-priority processes keep executing and low-priority
processes stay blocked for an indefinite time.
• System Deadlock
• A process must request a resource before using it, and must release the resource after finishing with it.
• A set of processes is in a deadlock state when every process in the set is waiting for a resource that can only
be released by another process in the set.
Necessary Conditions for Deadlock
• Mutual Exclusion: the shared resources are acquired and used in a mutually exclusive manner, that is, by
at most one process at a time.
• Hold and wait: Each process continues to hold resources already allocated to it while waiting to acquire
other resources.
• No preemption: Resources granted to a process can be released back to the system only as a result of a
voluntary action by the process; the system cannot forcefully revoke them.
• Circular wait: Deadlocked processes are involved in a circular chain such that each process holds one or
more resources being requested by the next process in the chain.
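All four conditions can be seen at work in a small sketch, assuming pthreads and two mutexes m1 and m2 (illustrative names): each thread holds one lock (mutual exclusion, hold and wait), neither lock can be revoked (no preemption), and each waits for the lock the other holds (circular wait), so the program hangs.

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg)
{
    pthread_mutex_lock(&m1);   /* holds m1 ... */
    sleep(1);                  /* give thread_b time to grab m2 */
    pthread_mutex_lock(&m2);   /* ... and waits for m2 */
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}

static void *thread_b(void *arg)
{
    pthread_mutex_lock(&m2);   /* holds m2 ... */
    sleep(1);
    pthread_mutex_lock(&m1);   /* ... and waits for m1: circular wait */
    pthread_mutex_unlock(&m1);
    pthread_mutex_unlock(&m2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, thread_a, NULL);
    pthread_create(&b, NULL, thread_b, NULL);
    pthread_join(a, NULL);     /* never returns: the threads deadlock */
    pthread_join(b, NULL);
    return 0;
}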
HOW TO HANDLE DEADLOCKS
There are three methods:
• 1. Prevention: prevent any one of the four necessary conditions from holding (see the lock-ordering
sketch below).
• 2. Avoidance: allow all deadlock conditions in principle, but check each allocation request and refuse
any that could lead toward a deadlock (an unsafe state).
• 3. Detection and recovery: allow deadlock to happen, then recover. This requires using both a detection
algorithm and a recovery scheme.
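As a sketch of prevention, breaking the circular-wait condition in the two-mutex example above only requires imposing a global lock order (always m1 before m2): if every thread acquires locks in the same order, no circular chain can form.

void *safe_thread(void *arg)
{
    pthread_mutex_lock(&m1);     /* always take the lower-ordered lock first */
    pthread_mutex_lock(&m2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
    return NULL;
}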
ME with Semaphore
• Program/module smutex
  var mutex : semaphore; {binary}
  {Parent process}
  begin {smutex}
    mutex := 1; {free}
    initialize p1, p2, p3;
  end {smutex}
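A C rendering of the smutex sketch above, assuming POSIX unnamed semaphores (p1, p2, p3 are stood in for by a loop): the binary semaphore is initialized to 1 (free), and each process brackets its critical section with sem_wait (P) and sem_post (V).

#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;

static void process(int id)
{
    sem_wait(&mutex);                   /* P(mutex): enter critical section */
    printf("process %d in critical section\n", id);
    sem_post(&mutex);                   /* V(mutex): leave critical section */
}

int main(void)
{
    sem_init(&mutex, 0, 1);             /* mutex := 1 {free} */
    for (int id = 1; id <= 3; id++)     /* stands in for p1, p2, p3 */
        process(id);
    sem_destroy(&mutex);
    return 0;
}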
Device management in an operating system means controlling the Input/Output devices, such as disks, microphones,
keyboards, printers, magnetic tape, and USB ports.
• There are four main functions involved in device management:
• open and close device drivers.
• communicate with device drivers.
• control and monitor device drivers.
• write and install device drivers.
• Device drivers are software programs that enable the operating system to communicate with the hardware
devices attached to the computer system.
• A device driver acts as a translator between the operating system and the hardware device, providing a
standard interface for the operating system to interact with the device.
• The function of a device driver is to facilitate communication between the operating system and the device
hardware, and to enable the operating system to control and manage the device.
Device Drivers
• There are different types of device drivers that are designed to handle different types of hardware devices.
Here are three common types of device drivers:
• Character device driver: devices that send data character by character are controlled by character device
drivers. Keyboards, mice, printers, and terminals are some of these devices.
• Character device drivers work by buffering the data received from the hardware device until the
operating system is ready to process it.
• Block device driver: hard disk drives and solid-state drives are examples of devices that transfer data in
fixed-size blocks and are managed by block device drivers.
• Network device driver: network device drivers are used to manage network interface devices such as
Ethernet cards and Wi-Fi adapters.
• Network device drivers provide the operating system with the ability to communicate with other devices on
a network.
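A minimal character-device-driver skeleton, assuming the Linux kernel-module API (the names demo_read and DEMO_MAJOR, and the major number, are illustrative): the file_operations table is the standard interface described above, and the kernel invokes its hooks when a process opens or reads the device node.

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

#define DEMO_MAJOR 240                    /* an unused major number */

static ssize_t demo_read(struct file *f, char __user *buf,
                         size_t len, loff_t *off)
{
    static const char msg[] = "hello from the driver\n";
    /* copy the message to user space, respecting len and offset */
    return simple_read_from_buffer(buf, len, off, msg, sizeof(msg));
}

static const struct file_operations demo_fops = {
    .owner = THIS_MODULE,
    .read  = demo_read,                   /* the character I/O entry point */
};

static int __init demo_init(void)
{
    return register_chrdev(DEMO_MAJOR, "demo", &demo_fops);
}

static void __exit demo_exit(void)
{
    unregister_chrdev(DEMO_MAJOR, "demo");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");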
• SCAN (Scanning) is a disk scheduling algorithm used in operating systems to manage disk I/O
operations.
• The SCAN algorithm moves the disk head in a single direction and services all requests until it reaches the
end of the disk, and then it reverses direction and services all the remaining requests.
• In SCAN, the disk head starts at one end of the disk, moves toward the other end, and services all requests
that lie in its path.
• Once the disk head reaches the other end, it reverses direction and services all requests that it missed on the
way. This continues until all requests have been serviced.
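A sketch of SCAN head movement, under these assumptions: cylinders are numbered 0..disk_max, the head starts at head moving upward, and movement is measured in cylinders. Requests above the head are serviced in ascending order, the head continues to the end of the disk, then the remaining requests are serviced in descending order.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int scan_movement(int *req, int n, int head, int disk_max)
{
    qsort(req, n, sizeof(int), cmp_int);

    int pos = head, total = 0;
    /* sweep up: service every request at or above the head */
    for (int i = 0; i < n; i++)
        if (req[i] >= head) { total += req[i] - pos; pos = req[i]; }
    total += disk_max - pos;          /* continue to the end of the disk */
    pos = disk_max;
    /* reverse: service the remaining requests on the way down */
    for (int i = n - 1; i >= 0; i--)
        if (req[i] < head) { total += pos - req[i]; pos = req[i]; }
    return total;
}

int main(void)
{
    int req[] = {98, 183, 37, 122, 14, 124, 65, 67};
    /* prints 331 for this queue with the head at cylinder 53 */
    printf("SCAN movement: %d\n", scan_movement(req, 8, 53, 199));
    return 0;
}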
• The C-SCAN (Circular SCAN) algorithm operates similarly to the SCAN algorithm, but it does not
reverse direction at the end of the disk.
• Instead, the disk head wraps around to the other end of the disk and continues to service requests.
• This algorithm can reduce the total distance the disk head must travel, improving disk access time.
• However, this algorithm can lead to long wait times for requests that arrive just behind the head, as
they must wait for the disk head to reach the end of the disk and wrap around to the other end before they can be serviced.
• The C-SCAN algorithm is often used in modern operating systems due to its ability to reduce disk access
time and improve overall system performance.
• The LOOK algorithm is similar to the SCAN algorithm, but the head travels only as far as the last request
in each direction, instead of going all the way to the end of the disk, and then reverses.
• By avoiding unnecessary traversal to the ends of the disk, this algorithm reduces the total distance the
disk head must travel, improving disk access time.
• However, as with SCAN, requests that arrive just behind the head can still wait a long time, since the
head services them only after it reverses direction.
• The LOOK algorithm is often used in modern operating systems due to its ability to reduce disk access time
and improve overall system performance.
• C-LOOK is similar to the C-SCAN disk scheduling algorithm.
• In this algorithm, the disk arm goes only as far as the last request to be serviced in front of the head,
instead of going to the end of the disk; from there it jumps to the last request at the other end.
• Thus, it also prevents the extra delay which might occur due to unnecessary traversal to the end of the disk.
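A companion sketch of C-LOOK, reusing cmp_int and the includes from the SCAN sketch above: the head goes only as far as the highest pending request, then jumps to the lowest pending request (the jump distance is counted as movement here) and sweeps upward again.

int clook_movement(int *req, int n, int head)
{
    qsort(req, n, sizeof(int), cmp_int);

    int pos = head, total = 0;
    /* sweep up, but only as far as the last request in this direction */
    for (int i = 0; i < n; i++)
        if (req[i] >= head) { total += req[i] - pos; pos = req[i]; }
    /* jump to the lowest pending request, then continue upward */
    for (int i = 0; i < n; i++)
        if (req[i] < head) { total += abs(pos - req[i]); pos = req[i]; }
    return total;
}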
Rotational position optimization (RPO) disk scheduling algorithms utilize seek-distance versus rotational-distance
information, implemented as RPO tables (arrays) stored in flash memory within each disk drive. Compact
representation schemes for this information have been proposed that reduce the required flash memory by a
factor of more than thirty, thereby reducing the manufacturing cost per drive; simulation results compare the
throughput of conservative and aggressive versions of such schemes with standard production drives that do
not use them.
Rotational optimization is a technique used in operating systems (OS) to optimize disk I/O performance.
Rotational optimization can improve performance when there are many requests for small pieces of
data randomly distributed throughout the disk cylinders.
But processes that access data sequentially tend to access entire tracks of data and thus do not benefit
much from rotational optimization.
Rotational latency: the time for the desired data to rotate from its current position to the read-write head.
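A worked example of the definition above, assuming a 7200 RPM drive: one full rotation takes 60/7200 s, about 8.33 ms, and on average the desired sector is half a rotation away, so the average rotational latency is about 4.17 ms.

#include <stdio.h>

int main(void)
{
    double rpm = 7200.0;
    double full_rotation_ms = 60.0 / rpm * 1000.0;    /* ~8.33 ms per rotation */
    double avg_latency_ms   = full_rotation_ms / 2.0; /* ~4.17 ms on average */
    printf("full rotation: %.2f ms, average latency: %.2f ms\n",
           full_rotation_ms, avg_latency_ms);
    return 0;
}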
Buffering is a component of the main memory (RAM) that temporarily holds data while it is being sent between
two devices. Buffering aids in matching the data stream's transmitter and receiver speeds. If the sender's transfer
rate is slower than the receiver's, a buffer in the receiver's main memory is created which stores the bytes received
from the sender. When all of the bytes of data have arrived, the receiver has data to work with.
Buffering is also useful when the data transfer sizes of the sender and receiver differ. Buffers are used in computer
networking to fragment and reassemble data. On the sender side, a large block of data is divided into small
packets and sent over the network. On the receiver side, a buffer gathers all the data packets and
reassembles them into the original data.
There are various features of buffering in the OS. Some features of the buffering are as follows:
1. It is a method for overlapping the Input/Output of a job with its own processing. While the processor
begins processing data that has just been read, the input device is instructed to begin the next input
immediately.
2. It also supports copy semantics, which implies that the version of the data in the buffer and the data
version at the time of the system call must be the same.
3. It resolves the issue of the speed differential between the two devices used to transfer data.
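A sketch of feature 3 (speed matching), assuming pthreads and POSIX semaphores, with illustrative names sender and receiver: a fixed-size ring buffer sits between the two devices, the empty semaphore blocks a fast sender when the buffer is full, and the full semaphore blocks the receiver when it is empty.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 8

static int  buf[BUF_SIZE];
static int  in, out;
static sem_t empty, full;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *sender(void *arg)
{
    for (int i = 0; i < 32; i++) {
        sem_wait(&empty);                 /* block if the buffer is full */
        pthread_mutex_lock(&lock);
        buf[in] = i; in = (in + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&full);
    }
    return NULL;
}

static void *receiver(void *arg)
{
    for (int i = 0; i < 32; i++) {
        sem_wait(&full);                  /* block if the buffer is empty */
        pthread_mutex_lock(&lock);
        int v = buf[out]; out = (out + 1) % BUF_SIZE;
        pthread_mutex_unlock(&lock);
        sem_post(&empty);
        printf("received %d\n", v);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty, 0, BUF_SIZE);        /* all slots start free */
    sem_init(&full, 0, 0);                /* no data yet */
    pthread_t s, r;
    pthread_create(&s, NULL, sender, NULL);
    pthread_create(&r, NULL, receiver, NULL);
    pthread_join(s, NULL);
    pthread_join(r, NULL);
    return 0;
}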
What is Caching?
The cache is processor-implemented memory that holds a copy of the original data. The main concept behind
cache memory is that recently accessed disk blocks should be saved in the cache so that if any user
again requires access to the same disk blocks, the request may be handled locally via the cache memory, eliminating the
network traffic.
Cache memory size is limited because it only stores recently used data. When the cached copy is changed, the
changes may also appear in the original file. If data is needed that is not in the cache memory, it is copied
from the source into the cache, and it can then be served from the cache the next time it is requested.
The cache data may also be stored on disk instead of RAM, which is more reliable. If the computer system is
destroyed, the cached data remains on the disk, but data will be lost in volatile memory, such as RAM. One main
benefit of storing cached data in the main memory is that it may be accessed quickly.
Example: a cache is utilized in systems to speed up access to frequently used data.
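A sketch of this idea as a tiny direct-mapped block cache, assuming 512-byte blocks; cache_read, disk_read, and the table sizes are illustrative. On a hit the request is served locally from the table; on a miss the block is fetched once and kept for later requests.

#include <string.h>

#define NSLOTS     16
#define BLOCK_SIZE 512

struct slot { int block_no; char data[BLOCK_SIZE]; int valid; };
static struct slot cache[NSLOTS];

static void disk_read(int block_no, char *out)   /* stand-in for the slow path */
{
    memset(out, block_no & 0xFF, BLOCK_SIZE);    /* fake device contents */
}

void cache_read(int block_no, char *out)
{
    struct slot *s = &cache[block_no % NSLOTS];  /* direct-mapped slot */
    if (!s->valid || s->block_no != block_no) {  /* miss: fetch and keep */
        disk_read(block_no, s->data);
        s->block_no = block_no;
        s->valid = 1;
    }
    memcpy(out, s->data, BLOCK_SIZE);            /* serve locally from the cache */
}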
Advantages and Disadvantages of Caching
There are various advantages and disadvantages of caching in the operating system. Some advantages and
disadvantages of caching are as follows:
Advantages
1. It is faster than the system's main and secondary memory.
2. It increases the CPU's performance by storing all the information and instructions that are regularly used by
the CPU.
3. Cache memory has a faster data access time than RAM.
4. The CPU works more quickly as the data access speed increases.
Disadvantages
1. It is more expensive than other memory.
2. Its storage capacity is limited.
3. It holds the data temporarily.
4. If the system is turned off, the stored data in the memory is destroyed.
The most basic difference between buffering and caching is that buffering is used to match the speed of data transmission
between sender and receiver, while caching is used to increase the speed of data processing by the CPU.
Definition: A buffer is a component of the main memory (RAM) that temporarily holds data while it is being sent
between two devices. A cache is processor-implemented memory that holds a copy of the original data.
Basic: Buffering matches the speed between the data stream's sender and receiver. Caching increases the access
speed of repeatedly used data.
Storage: A buffer stores the original copy of the data. A cache stores a copy of the original data.
Location: A buffer is a part of the main memory (RAM). A cache is implemented on the processor; however, it can
also be implemented on RAM and storage.
Policy: A buffer may be implemented with a first-in, first-out (FIFO) policy. A cache may be implemented with a
least-recently-used (LRU) policy.
Use: A buffer is mainly used for the I/O process. A cache is utilized for reading and writing processes from the
system disk.