OS Unit 3 Notes

Unit 3

Process Synchronization
• The critical section problem is to design a protocol, followed by a group of cooperating processes, such that when one process has entered its critical section, no other process is allowed to execute in its critical section.
• When two processes access and manipulate a shared resource concurrently, and the outcome of the execution depends on the order in which the processes access the resource, we have a race condition.
• Race conditions lead to inconsistent states of data.
• Therefore, we need a synchronization protocol that allows processes to cooperate while manipulating shared resources; designing such a protocol is essentially the critical section problem.
• Critical Section Problem → Semaphore
Solutions to the critical section problem
• Any solution to the critical section problem must satisfy the following requirements:
• Mutual exclusion: When one process is executing in its critical section, no other process is allowed to execute in its critical section.
• Progress: If no process is executing in its critical section and some process wishes to enter its critical section, then the selection of the process that will enter next cannot be postponed indefinitely.
• Bounded waiting: There must be a bound on the number of times other processes are allowed to enter their critical sections after a process has requested to enter its critical section and before that request is granted.
• The critical section contains the shared variables or resources that need to be synchronized to maintain the consistency of the data.

Process synchronization is the task of coordinating the execution of processes so that no two processes access the same shared data and resources at the same time.
• It is especially needed in a multi-process system, when multiple processes run together and more than one process tries to gain access to the same shared resource or data at the same time.
• This can lead to inconsistency of the shared data:
• a change made by one process is not necessarily reflected when other processes access the same shared data.
• To avoid this kind of inconsistency, the processes need to be synchronized with each other, as the sketch below illustrates.
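To make the lost-update problem concrete, here is a minimal sketch (in Python, our choice of language; the notes themselves use Pascal-style pseudocode) of threads racing on a shared counter. The statement counter += 1 is a read-modify-write sequence, not an atomic operation, so concurrent increments can interleave and overwrite each other.

    import threading

    counter = 0  # shared data

    def increment(n):
        global counter
        for _ in range(n):
            counter += 1  # read-modify-write: not atomic, so updates can be lost

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # expected 400000, but often less because of the race

Wrapping the increment in a lock (with lock: counter += 1) restores mutual exclusion and makes the final value deterministic.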
Mutual Exclusion
• The shared resources are acquired and used in a mutually exclusive manner, that is, by at most one process at a time.
• A deadlock is a situation where a group of processes is permanently blocked as a result of each process having acquired a subset of the resources needed for its completion and waiting for the release of the remaining resources held by others in the same group,
• thus making it impossible for any of the processes to proceed.
• Deadlock can occur in a concurrent environment as a result of uncontrolled granting of system resources to requesting processes.
Deadlock
• Bridge-crossing example: traffic can flow in only one direction.
• Each section of the bridge can be viewed as a resource.
• If a deadlock occurs, it can be resolved if one car backs up (preempt resources and roll back).
• Several cars may have to be backed up if a deadlock occurs.
• Starvation is possible.
• Starvation is the problem that occurs when high-priority processes keep executing while low-priority processes stay blocked for an indefinite time.
• System Deadlock
• A process must request a resource before using it, and must release the resource after finishing with it.
• A set of processes is in a deadlock state when every process in the set is waiting for a resource that can be released only by another process in the set.
Necessary Conditions for Deadlock
• Mutual exclusion: the shared resources are acquired and used in a mutually exclusive manner, that is, by at most one process at a time.
• Hold and wait: each process continues to hold resources already allocated to it while waiting to acquire other resources.
• No preemption: resources granted to a process can be released back to the system only as a result of a voluntary action by the process; the system cannot forcefully revoke them.
• Circular wait (circular chain): the deadlocked processes are involved in a circular chain such that each process holds one or more resources being requested by the next process in the chain.
HOW TO HANDLE DEADLOCKS
There are three methods:
• 1. Prevention: prevent any one of the 4 conditions from happening.
• 2. Avoidance: allow all deadlock conditions, but calculate cycles about to happen and stop dangerous operations.
• 3. Allow deadlock to happen. This requires using both:
• Detection: know that a deadlock has occurred.
• Recovery: regain the resources.
Deadlock Prevention
• Do not allow one of the four conditions to occur (eliminate one of them):
a. Mutual Exclusion
b. Hold and Wait
c. No Preemption
d. Circular Wait
• Mutual Exclusion
I. Mutual exclusion automatically holds for printers and other non-sharable devices.
II. Shared entities (e.g., read-only files) don't need mutual exclusion (and aren't susceptible to deadlock).
III. Prevention is not possible here, since some devices are inherently non-sharable.
IV. Therefore, denying mutual exclusion alone cannot prevent deadlock.

• Hold and wait
1. Collect all resources before execution.
2. A particular resource can be requested only when no others are being held.
3. A sequence of resources is always collected beginning with the same one.
4. Utilization is low, and starvation is possible.
• No preemption
1. Release any resources already being held if the process cannot get an additional resource.
2. Allow preemption: if a needed resource is held by another process that is itself waiting on some resource, steal it.
3. Otherwise, wait.
• Circular wait
1. One way to prevent the circular wait condition is a linear ordering of the different types of system resources.
2. Number the resources and request them only in ascending order (see the sketch after this list).
3. The disadvantage of this approach is that resources must be acquired in the prescribed order, as opposed to being requested when actually needed.
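A minimal sketch of the resource-ordering rule, in Python (our choice of language; the resource names and their numbering are illustrative assumptions):

    import threading

    # Fixed global numbering of resource types; requests must be made in ascending order.
    RESOURCE_ORDER = {"printer": 1, "scanner": 2, "tape": 3}
    locks = {name: threading.Lock() for name in RESOURCE_ORDER}

    def acquire_ordered(names):
        # Acquire the named resources in ascending resource-number order,
        # so no circular hold-and-wait chain can ever form.
        for name in sorted(names, key=RESOURCE_ORDER.get):
            locks[name].acquire()

    def release_all(names):
        # Release in the reverse (descending) order.
        for name in sorted(names, key=RESOURCE_ORDER.get, reverse=True):
            locks[name].release()

Every process that needs, say, the scanner and the printer ends up taking the printer lock first, so two processes can never each hold a resource the other one is waiting for.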
Deadlock Prevention
• Each of these prevention techniques may cause a decrease in the utilization of the available resources.
• For this reason, prevention isn't necessarily the best technique.
• Prevention is, however, generally the easiest to implement.
Deadlock Avoidance
• In deadlock avoidance, the necessary conditions are left untouched.
Instead, extra information about resources is used by the OS to do better forward planning of process/resource allocation,
• which indirectly avoids circular wait.
• The basic idea of deadlock avoidance is to grant only those requests for available resources that cannot possibly result in a state of deadlock.
• This strategy is usually implemented by having the resource allocator examine the available resources for each particular request.
• If granting the resource cannot possibly lead to deadlock, the resource is granted to the requester.
• Otherwise, the requesting process is suspended until such time as its pending request can safely be granted.
• This is usually after one or more resources held by other active processes are released.
• In order to evaluate the safety of individual system states, deadlock avoidance requires all processes to state (pre-claim) their maximum resource requirements prior to execution.
• The resource allocator keeps track of the number of allocated and the number of available resources of each type,
• in addition to recording the remaining number of resources pre-claimed but not yet requested by each process.
• A process that requests a temporarily unavailable resource is made to wait.
• If the requested resource is available for allocation, the resource allocator examines whether granting the request can lead to deadlock, by checking whether each of the already-active processes could safely complete.
• Alternatively, granting the resource could potentially lead to a deadlock state,
• in which case the resource allocator suspends the requesting process until the desired resource can safely be granted.
• A resource allocator can determine safety and track resource allocations by using variants of the general resource graph representation.
• One such representation is a two-dimensional matrix with processes as rows and resources as columns.
• In this representation, matrix elements correspond to individual edges.
• The number of resources allocated is recorded in a matrix ALLOCATED, and the resource claims are recorded in a matrix CLAIMS.

• The crucial part of deadlock avoidance is the safety test.
• A state is regarded as safe if all processes already granted resources would be able to complete, in some order, even if each such process were to use all the resources it is entitled to.
• Thus a practical safety test must determine whether such an ordering exists.
• Banker's algorithm is a deadlock avoidance algorithm.
• It is so named because the same kind of reasoning is used in banking to determine whether a loan can safely be granted.
• Consider n account holders in a bank, where the sum of the money in all of their accounts is S.
• Every time a loan has to be granted, the bank subtracts the loan amount from the cash it has available.
• It then checks whether the remaining cash would still be enough to cover the claims of all n account holders; the loan is granted only in that case, because only then does the bank have enough money even if all n account holders draw all their money at once.
• Banker's algorithm works in a similar way in computers.
• Whenever a new process is created, it must specify exactly the maximum number of instances of each resource type that it will need.
Resource Request
1. For each resource request, verify that the issuing process is authorized to make the request, by virtue of having sufficiently many unused claims on the requested type of resource.
   a. Refuse to consider unauthorized requests.
2. When a process requests a resource that is not available for allocation, suspend the calling process until its request can be safely granted.
3. When a process requests an available resource, pretend that the resource is granted by updating the ALLOCATED, CLAIMS, and AVAILABLE data structures accordingly. Unmark all processes.
4. Find an unmarked process i such that
   CLAIMS_i <= AVAILABLE
   If found, mark process i, update the AVAILABLE vector,
   AVAILABLE := AVAILABLE + ALLOCATED_i
   and repeat this step. When no qualifying process can be found, proceed to the next step.
5. If all processes are marked, the system state is safe, so grant the requested resource, restore the AVAILABLE vector to its value as set in step 3, and exit to the OS.
   Otherwise, the system state is not safe, so suspend the process instead; restore ALLOCATED, CLAIMS, and AVAILABLE to their values prior to the execution of step 3, and exit to the OS.
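The marking loop of steps 4-5 is the safety test. A minimal sketch in Python (our choice of language; the variable names follow the matrices above, and the example numbers are illustrative):

    def is_safe(claims, allocated, available):
        # claims[i]: remaining pre-claimed need of process i (CLAIMS_i)
        # allocated[i]: resources currently held by process i (ALLOCATED_i)
        # available: the free pool (AVAILABLE)
        available = available[:]                 # work on a copy
        marked = [False] * len(claims)
        progress = True
        while progress:
            progress = False
            for i in range(len(claims)):
                if not marked[i] and all(c <= a for c, a in zip(claims[i], available)):
                    # Process i could run to completion and return its holdings.
                    available = [a + g for a, g in zip(available, allocated[i])]
                    marked[i] = True
                    progress = True
        return all(marked)   # safe iff every process can finish in some order

    # Two resource types, two processes (illustrative numbers):
    print(is_safe(claims=[[1, 2], [0, 1]],
                  allocated=[[1, 0], [0, 2]],
                  available=[1, 1]))   # True: P2 can finish first, then P1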
Resource Release
• When a resource is released, update the AVAILABLE data structure and reconsider pending requests, if any, for that resource type.
• The safety-evaluation algorithm described above is known as the BANKER'S ALGORITHM, and its time complexity is proportional to r × p²,
• where p is the number of active processes and r is the number of resources.
• The algorithm tests for safety by simulating allocations up to the maximum pre-claimed amounts of all resources,
• and it checks all the possible completion orders before determining whether the allocation should proceed.
Deadlock Detection and Recovery
• Allow system to enter deadlock state
• Detection algorithm
• Recovery scheme
• If a system does not employ either a deadlock-prevention or deadlock-avoidance algorithm, then a deadlock
situation might occur.
• We know that a deadlock will occur if the four conditions for deadlock are satisfied.
• In this environment, the system must provide:
• An algorithm that examines the state of the system to determine whether the system is in a safe state (no deadlock) or a deadlock has occurred.
• An algorithm to recover from the deadlock.
• Recovery from deadlock: when a detection algorithm detects that a deadlock exists, there are two options to break the deadlock.
• One is simply to abort one or more processes to break the circular wait.
• The other is to preempt some resources from one or more of the deadlocked processes.
• Process termination: to eliminate a deadlock by aborting a process, we use one of two methods; in both methods, the system reclaims all the resources allocated to the terminated processes.
• a. Abort all deadlocked processes. This method will clearly break the deadlock cycle, but at great expense: the deadlocked processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.
• b. Abort one process at a time until the deadlock cycle is eliminated. This method incurs considerable overhead, since after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.
• Resource preemption: to eliminate a deadlock using resource preemption, we successively preempt some resources from processes and give these resources to other requesting processes until the deadlock cycle is broken.
• However, three issues need to be addressed with this method:
• i. Selecting a victim. Which resources and which processes are to be preempted to minimize cost? Cost factors may include such parameters as the number of resources a deadlocked process is holding and the amount of time the process has so far consumed during its execution.
• ii. Rollback. If we preempt a resource from a process, it clearly cannot continue with its normal execution, so we must roll the process back to some safe state and restart it from that state.
• iii. Starvation. How do we ensure that starvation never occurs? That is, how do we guarantee that resources will not always be preempted from the same process? If that happened, that process would never finish its designated task.
• The most common solution is to include the number of rollbacks in the cost factor.
• In deadlock detection approaches, the resource allocator simply grants each request for an available
resource.
• When called upon to determine whether a given system state is deadlocked, the algorithm operates as follows:
1. Form ALLOCATED, REQUESTED, and AVAILABLE in accordance with the system state. Unmark all active processes.
2. Find an unmarked process i such that
   REQUESTED_i <= AVAILABLE
   If found, mark process i, update AVAILABLE,
   AVAILABLE := AVAILABLE + ALLOCATED_i
   and repeat this step. When no qualifying process can be found, proceed to the next step.
3. If all processes are marked, the system is not deadlocked.
   Otherwise, the system is deadlocked, and the set of unmarked processes is the set of deadlocked processes.
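A minimal Python sketch of this detection pass (our choice of language; it mirrors the Banker's safety test above, but compares outstanding REQUESTED amounts rather than maximum claims):

    def deadlocked_processes(requested, allocated, available):
        # requested[i]: outstanding requests of process i (REQUESTED_i)
        # allocated[i]: resources currently held by process i (ALLOCATED_i)
        available = available[:]
        marked = [False] * len(requested)
        progress = True
        while progress:
            progress = False
            for i in range(len(requested)):
                if not marked[i] and all(r <= a for r, a in zip(requested[i], available)):
                    available = [a + g for a, g in zip(available, allocated[i])]
                    marked[i] = True
                    progress = True
        return [i for i, m in enumerate(marked) if not m]   # unmarked = deadlocked

An empty result means the system is not deadlocked; otherwise the returned indices are the deadlocked processes.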
• Deadlock detection is only a part of the deadlock-handling task.
• Detecting a deadlock only reveals the existence of a problem;
• the system must then break the deadlock to reclaim the resources held by blocked processes and to ensure that the affected processes can eventually be completed.
• The first step in deadlock recovery is to identify the deadlocked processes.
• This gives an edge to detection algorithms that indicate which processes are deadlocked.
• The next step is to break the deadlock by rolling back or restarting one or more of the deadlocked processes.
Deadlock Detection and Recovery
• Deadlock detection and recovery provides a potentially higher degree of concurrency than deadlock prevention and avoidance.
• Deadlock recovery can be attractive in systems with a low probability of deadlocks.
• Here the OS doesn't apply any mechanism to avoid or prevent deadlocks;
• the system instead accepts that a deadlock may occur.
• To get rid of deadlocks, the OS periodically checks the system for any deadlock.
How to Detect Deadlock in Operating Systems?
• As noted above, the OS in this approach assumes that deadlocks will occur and periodically scans the system for them.
• If any deadlocks are discovered, the OS attempts to restore the system using one of the recovery methods described above.
• The OS's primary responsibility here is to detect deadlocks.
• With the help of the resource allocation graph, the OS can detect deadlocks.
• If a cycle forms in a system with single-instance resource types, there will undoubtedly be a deadlock.
• Detecting a cycle, on the other hand, is insufficient in a graph with multiple-instance resource types.
• In that case, by turning the resource allocation graph into an allocation matrix and a request matrix, we must apply the safety algorithm to the system.
Model and Mechanisms
• A systems model describes how processes interact and what operations these processes perform, but it does
not go into details as to how these processes are implemented.
• Mechanisms are the implementations that enforce policies, and often depend to some extent on the hardware
on which the operating system runs.
• A mechanism might operate by itself, or with others, to provide a particular service.
• For instance, processes may be granted resources using a first-come, first-served policy.
• This policy may be implemented using a queue of requests.
A semaphore is a data structure that provides mutual exclusion for critical sections.
• Waiters block; interrupts remain enabled within the critical section.
• Described by Dijkstra for the THE system in 1968.
• A semaphore mechanism basically consists of two primitive operations, SIGNAL and WAIT (originally defined by Dijkstra as V and P, respectively), which operate on a specific type of semaphore variable, s.
• Semaphores are a relatively simple but powerful mechanism for ensuring mutual exclusion among concurrent processes accessing a shared resource.
• The semaphore variable can assume integer values and, except possibly for initialization, may be accessed and manipulated only by means of the SIGNAL and WAIT operations.
• The two primitive operations are defined as follows:
• wait(s): decrements the value of the argument semaphore s as soon as the result would be nonnegative. Completion of the WAIT operation, once the decision is made to decrement its argument semaphore, must be indivisible.
• signal(s): increments the value of its argument semaphore s as an indivisible operation.
• A general semaphore may take any integer value.
• A binary semaphore MUTEX is used to protect a shared resource by enforcing its use in a mutually exclusive manner.
• Each process ensures the integrity of its critical section by opening it with a WAIT and closing it with a SIGNAL on the related semaphore.
A busy-wait implementation of WAIT and SIGNAL

    wait(s):   while not (s > 0) do {keep testing};
               s := s - 1;                          (a) WAIT

    signal(s): s := s + 1;                          (b) SIGNAL
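A minimal sketch of the MUTEX usage described above, using Python's built-in counting semaphore (our choice of language; acquire() and release() correspond to WAIT and SIGNAL):

    import threading

    mutex = threading.Semaphore(1)   # binary semaphore, initially 1 = free
    shared = []

    def worker(item):
        mutex.acquire()              # WAIT(mutex): open the critical section
        try:
            shared.append(item)      # critical section: manipulate shared data
        finally:
            mutex.release()          # SIGNAL(mutex): close the critical section

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared)                    # all five items present, none lost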
Monitors
• As far as mutual exclusion is concerned, semaphores deal only with making sure that at most one process is allowed to access a shared resource at any time.
• A problem remains beyond mutual exclusion: a process that is allowed to access the resource may still corrupt it, incorrectly or even malevolently.
• Monitors are an OS structuring mechanism that addresses this issue in a rigorous and systematic manner.
• The basic idea behind monitors is to provide structured data abstraction in addition to concurrency control, that is, to control not only the timing but also the nature of the operations performed on global data,
• in order to prevent harmful or meaningless updates.
• Monitors go a significant step beyond semaphores by making the critical data accessible only indirectly and exclusively via a set of publicly available procedures.
• In terms of the producers/consumers problem:
• the shared global buffer may be declared as belonging to a monitor, and neither producer nor consumer would be permitted direct access to it.
• Instead, producers may be allowed to call a monitor-provided public procedure and supply the produced item to the buffer.
How a semaphore can be implemented with a monitor

    wait_signal: monitor
    begin
        busy: boolean;
        free: condition;

        procedure mwait
        begin
            if busy then free.wait;
            busy := true;
        end;

        procedure msignal
        begin
            busy := false;
            free.signal;
        end;

        {monitor body - initialization}
        busy := false;
    end wait_signal
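A minimal Python rendering of the wait_signal monitor (our choice of language; the Lock models the monitor's implicit mutual exclusion and the Condition models the free queue):

    import threading

    class WaitSignal:
        def __init__(self):
            self._lock = threading.Lock()               # monitor entry lock
            self._free = threading.Condition(self._lock)
            self._busy = False                          # {initialization: busy := false}

        def mwait(self):
            with self._lock:
                while self._busy:                       # "if busy then free.wait"
                    self._free.wait()
                self._busy = True

        def msignal(self):
            with self._lock:
                self._busy = False
                self._free.notify()                     # "free.signal"

Note the while instead of if around the wait: Python conditions have Mesa-style ("signal and continue") semantics, so a woken thread must re-check the guard, whereas the classical Hoare-style monitor in the pseudocode hands control to the signalled process immediately.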

ME with Semaphore
    program/module smutex;
    var mutex: semaphore; {binary}

    {parent process}
    begin {smutex}
        mutex := 1; {free}
        initialize p1, p2, p3;
    end {smutex}

Device management in an operating system means controlling the Input/Output devices such as disks, microphones, keyboards, printers, magnetic tapes, USB ports, and so on.
• Four main functions are involved in device management:
• open and close device drivers.
• communicate with device drivers.
• control and monitor device drivers.
• write and install device drivers.
• Device drivers are software programs that enable the operating system to communicate with the hardware
devices attached to the computer system.
• A device driver acts as a translator between the operating system and the hardware device, providing a
standard interface for the operating system to interact with the device.
• The function of a device driver is to facilitate communication between the operating system and the device
hardware, and to enable the operating system to control and manage the device.
Device Drivers
• There are different types of device drivers, designed to handle different types of hardware devices. Three common types of device drivers are:
• Character device driver: devices that send data character by character are controlled by character device drivers. Keyboards, mice, printers, and terminals are examples of such devices.
• Character device drivers work by buffering the data received from the hardware device until the operating system is ready to process it.
• Block device driver: hard disk drives and solid-state drives are examples of devices that transfer data in fixed-size blocks and are managed by block device drivers.
• Network device driver: network device drivers are used to manage network interface devices such as Ethernet cards and Wi-Fi adapters.
• Network device drivers provide the operating system with the ability to communicate with other devices on
a network.

Disk Scheduling Strategies in OS

• Disk scheduling is an important process in operating systems that determines the order in which disk access requests are serviced.
• The objective of disk scheduling is to minimize the time it takes to access data on the disk and to complete each disk access request.
• Disk access time is determined mainly by two factors: seek time and rotational latency.
• Seek time is the time it takes for the disk head to move to the desired location on the disk,
• while rotational latency is the time taken by the disk to rotate the desired data sector under the disk head.
• Disk scheduling algorithms are an essential component of modern operating systems and are responsible for
determining the order in which disk access requests are serviced.
• The primary goal of these algorithms is to minimize disk access time and improve overall system
performance.
Important Terms related to Disk Scheduling Algorithms
• Seek Time - It is the time taken by the disk arm to locate the desired track.
• Rotational Latency - The time taken for the desired sector of the disk to rotate into position under the read/write head is called rotational latency.
• Transfer Time - It is the time taken to transfer the data requested by the processes.
• Disk Access Time - Disk Access time is the sum of the Seek Time, Rotational Latency, and Transfer Time.
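As a quick worked example (the numbers here are illustrative assumptions, not from the notes): on a 7200 RPM drive, one rotation takes 60/7200 s ≈ 8.33 ms, so the average rotational latency (half a rotation) is about 4.17 ms. With an average seek time of 5 ms and a transfer time of 0.5 ms:

    Disk Access Time = Seek Time + Rotational Latency + Transfer Time
                     = 5 ms + 4.17 ms + 0.5 ms ≈ 9.67 ms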
The First-Come-First-Served (FCFS) disk scheduling algorithm is one of the simplest and most
straightforward disk scheduling algorithms used in modern operating systems.
• It operates on the principle of servicing disk access requests in the order in which they are received.
• In the FCFS algorithm, the disk head is positioned at the first request in the queue and the request is
serviced.
• The disk head then moves to the next request in the queue and services that request.
• This process continues until all requests have been serviced.
• Shortest Seek Time First (SSTF) is a disk scheduling algorithm used in operating systems to efficiently
manage disk I/O operations.
• The goal of SSTF is to minimize the total seek time required to service all the disk access requests.
• In SSTF, the disk head moves to the request with the shortest seek time from its current position, services it,
and then repeats this process until all requests have been serviced.
• The algorithm prioritizes disk access requests based on their proximity (closeness) to the current position of
the disk head, ensuring that the disk head moves the shortest possible distance to service each request.

• SCAN (Scanning) is a disk scheduling algorithm used in operating systems to manage disk I/O
operations.
• The SCAN algorithm moves the disk head in a single direction and services all requests until it reaches the
end of the disk, and then it reverses direction and services all the remaining requests.
• In SCAN, the disk head starts at one end of the disk, moves toward the other end, and services all requests
that lie in its path.
• Once the disk head reaches the other end, it reverses direction and services all requests that it missed on the
way. This continues until all requests have been serviced.
• The C-SCAN (Circular SCAN) algorithm operates similarly to the SCAN algorithm, but it does not
reverse direction at the end of the disk.
• Instead, the disk head wraps around to the other end of the disk and continues to service requests.
• This algorithm can reduce the total distance the disk head must travel, improving disk access time.
• However, this algorithm can lead to long wait times for requests that are made near the end of the disk, as
they must wait for the disk head to wrap around to the other end of the disk before they can be serviced.
• The C-SCAN algorithm is often used in modern operating systems due to its ability to reduce disk access
time and improve overall system performance.

• The LOOK algorithm is similar to the SCAN algorithm, but the disk head reverses direction as soon as it has serviced the last pending request in its current direction, instead of travelling all the way to the end of the disk.
• This algorithm reduces the total distance the disk head must travel, improving disk access time.
• However, it can still lead to long wait times for requests that arrive just behind the head, as they must wait until the head sweeps back before they can be serviced.
• The LOOK algorithm is often used in modern operating systems due to its ability to reduce disk access time and improve overall system performance.

• C-LOOK is similar to the C-SCAN disk scheduling algorithm.
• In this algorithm, the head goes only as far as the last request to be serviced in front of it, instead of the disk arm going all the way to the end of the disk, and from there it jumps to the last request at the other end.
• Thus, it also prevents the extra delay that would occur due to unnecessary traversal to the end of the disk.
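A minimal sketch (in Python, our choice of language) that computes the total head movement for FCFS and SSTF on a sample request queue; the cylinder numbers and start position are the classic textbook example, used here purely for illustration:

    def fcfs(requests, head):
        # Service requests strictly in arrival order.
        total = 0
        for r in requests:
            total += abs(r - head)
            head = r
        return total

    def sstf(requests, head):
        # Always service the pending request closest to the current head position.
        pending, total = list(requests), 0
        while pending:
            nearest = min(pending, key=lambda r: abs(r - head))
            total += abs(nearest - head)
            head = nearest
            pending.remove(nearest)
        return total

    queue = [98, 183, 37, 122, 14, 124, 65, 67]
    print(fcfs(queue, head=53))   # 640 cylinders of head movement
    print(sstf(queue, head=53))   # 236 cylinders: far less seeking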
Rotational position optimization (RPO) disk scheduling algorithms utilize seek distance versus rotational distance information, implemented as RPO tables (arrays) stored in flash memory within each disk drive. Compact representation schemes for this information can reduce the required flash memory substantially (one published scheme reports a factor of more than thirty), thereby reducing the manufacturing cost per drive.
Rotational optimization is a technique used in operating systems to optimize disk I/O performance.
Rotational optimization can improve performance when there are many requests for small pieces of data randomly distributed throughout the disk cylinders.
Processes that access data sequentially, however, tend to access entire tracks of data and thus do not benefit much from rotational optimization.
Rotational Latency: the time for the desired data to rotate from its current position to the read/write head.

Caching and Buffering

A buffer is a region of main memory (RAM) that temporarily holds data while it is being transferred between two devices, and buffering is the use of such a region to smooth the transfer. Buffering helps match the speeds of the data stream's transmitter and receiver. If the sender's transfer rate is slower than the receiver's, a buffer is created in the receiver's main memory to accumulate the bytes received from the sender; when all of the bytes have arrived, the receiver has a complete unit of data to work with.
Buffering is also useful when the data transfer sizes of the sender and receiver differ. Buffers are used in computer networking to fragment and reassemble data: on the sender side, a large block of data is divided into small packets and sent over the network; on the receiver side, a buffer gathers all the data packets and reassembles them into the original large data set.
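A minimal sketch (in Python, our choice of language) of this fragment-and-reassemble role of buffers; the packet size and message are illustrative:

    def fragment(data: bytes, packet_size: int):
        # Sender side: split a large block into small packets.
        return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]

    def reassemble(packets):
        # Receiver side: buffer packets as they arrive, then hand over the whole.
        buffer = bytearray()
        for packet in packets:
            buffer.extend(packet)
        return bytes(buffer)

    message = b"buffering matches sender and receiver speeds"
    packets = fragment(message, packet_size=8)
    assert reassemble(packets) == message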

There are various features of buffering in the OS. Some features of buffering are as follows:
1. It is a method for overlapping Input/Output with processing, even within a single job: while the data just read is about to be processed, the input devices are instructed to begin the next input immediately.
2. It also supports copy semantics, which implies that the version of the data in the buffer and the version of the data at the time of the system call must be the same.
3. It resolves the issue of the speed difference between the two devices between which data is transferred.
What is Caching?
A cache is a small, fast memory, typically implemented in or near the processor, that holds copies of data. The main idea behind cache memory is that recently accessed disk blocks should be saved in the cache, so that if a user soon requires access to the same disk blocks again, the request can be handled locally via the cache memory, eliminating the network or disk traffic.
Cache memory size is limited, so it stores only recently used data. When you change a cached copy, the change must also be reflected in the original file. If the data you need is not in the cache memory, it is copied from the source into the cache and is then available the next time that data is requested.
Cached data may also be stored on disk instead of in RAM, which is more reliable: if the computer system crashes, cached data on disk survives, whereas data in volatile memory such as RAM is lost. One main benefit of storing cached data in main memory, on the other hand, is that it may be accessed quickly.
Example: a cache is utilized in systems to speed up access to frequently used data.
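A minimal sketch of the caching idea using Python's standard-library LRU cache (our choice of language; the 10 ms sleep stands in for a slow disk or network read):

    from functools import lru_cache
    import time

    @lru_cache(maxsize=128)          # bounded, like real cache memory
    def read_block(block_no):
        time.sleep(0.01)             # stand-in for a slow read from the source
        return f"data-{block_no}"

    read_block(7)                    # miss: copied from the source into the cache
    read_block(7)                    # hit: served locally, no slow read
    print(read_block.cache_info())   # CacheInfo(hits=1, misses=1, ...)

The maxsize bound plays the role of the limited cache capacity: once it is full, the least recently used entry is evicted, which matches the replacement policy listed for caching in the comparison table below.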
Advantages and Disadvantages of Caching
There are various advantages and disadvantages of caching in the operating system. Some advantages and
disadvantages of caching are as follows:
Advantages
1. It is faster than the system's main and secondary memory.
2. It increases the CPU's performance by storing the data and instructions that are regularly used by the CPU.
3. Cache memory has a faster data access time than RAM.
4. The CPU works more quickly as data access speeds increase.
Disadvantages
1. It is more expensive than other kinds of memory.
2. Its storage capacity is limited.
3. It holds data only temporarily.
4. If the system is turned off, the data stored in cache memory is lost.

The most basic difference between buffering and caching is that buffering is used to match the speed of data transmission between sender and receiver, while caching is used to increase the speed of data access.

Features   | Buffering                                                          | Caching
Definition | A region of main memory (RAM) that temporarily holds data while it is being sent between two devices. | A fast memory, implemented in or near the processor, that holds copies of data.
Basic      | It matches the speed between the data stream's sender and receiver. | It increases the access speed of repeatedly used data.
Storage    | It stores the original copy of the data.                           | It stores a copy of the original data.
Location   | A buffer is a part of the main memory (RAM).                       | It is implemented on the processor; however, it can also be implemented in RAM and on storage.
Policy     | It is typically managed with a first-in, first-out (FIFO) policy.  | It is typically managed with a least recently used (LRU) policy.
Use        | It is mainly used for the I/O process.                             | It is used for reading and writing processes from the system disk.
Type       | It may be a hardware or a software buffer.                         | It is a fast memory, implemented in hardware.
