
OS

1. Deadlock
• Deadlock is a situation in computing where two or more processes are
unable to proceed because each is waiting for another to release
resources.
• Key concepts include mutual exclusion, hold and wait, circular wait,
and no pre-emption.

Methods of Handling Deadlocks in Operating System


There are three ways to handle deadlock:
1. Deadlock Prevention or Avoidance
2. Deadlock Detection and Recovery
3. Deadlock Ignorance
1. Deadlock Prevention or Avoidance
Deadlock prevention and avoidance is one of the methods for handling
deadlock. First, we will discuss deadlock prevention, then deadlock
avoidance.
Deadlock Prevention
In deadlock prevention, the aim is to ensure that at least one of the four
necessary conditions for deadlock can never hold. This can be done as
follows:
(i) Mutual Exclusion
We only use locks for non-sharable resources; if a resource is sharable
(like a read-only file), we do not use locks there.
(ii) Hold and Wait
To ensure that hold and wait never occurs in the system, we must
guarantee that whenever a process requests a resource, it does not hold
any other resources.
(iii) No Pre-emption
If a process is holding some resources and requests other resources that
cannot be allocated immediately, then all resources the process is
currently holding are pre-empted.
(iv) Circular Wait
To remove circular wait from the system, we can impose an ordering on
resources and require every process to acquire them in that order.

Deadlock Avoidance
Avoidance looks ahead. To use the strategy of avoidance, we must make
an assumption: all information about the resources a process will ever
need is known before the process executes. We use the Banker's algorithm
to avoid deadlock; a sketch of its safety check follows.
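As a minimal sketch in C (assuming a hypothetical system of three
processes and three resource types, with made-up Allocation and Need
matrices), the safety check at the heart of the Banker's algorithm looks
like this:

#include <stdbool.h>
#include <stdio.h>

#define P 3  /* processes */
#define R 3  /* resource types */

bool is_safe(int avail[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = avail[r];

    int done = 0;
    while (done < P) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {            /* p can finish; reclaim its resources */
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;  /* no process can finish: unsafe */
    }
    return true;
}

int main(void) {
    /* hypothetical example state */
    int avail[R]    = {3, 3, 2};
    int alloc[P][R] = {{0, 1, 0}, {2, 0, 0}, {3, 0, 2}};
    int need[P][R]  = {{7, 3, 3}, {1, 2, 2}, {4, 0, 0}};
    printf("state is %s\n", is_safe(avail, alloc, need) ? "safe" : "unsafe");
    return 0;
}

The system grants a resource request only if the state that would result
still passes this safety check.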

2. Deadlock Detection
Deadlock detection is a process in computing where the system checks if there
are any sets of processes that are stuck waiting for each other indefinitely,
preventing them from moving forward.
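One common way to detect deadlock among single-instance resources is to
look for a cycle in the wait-for graph. A minimal sketch in C, assuming a
hypothetical wait_for adjacency matrix where wait_for[i][j] means process
i is waiting for process j:

#include <stdbool.h>
#include <stdio.h>

#define N 3

bool wait_for[N][N] = {
    {0, 1, 0},   /* P0 waits for P1 */
    {0, 0, 1},   /* P1 waits for P2 */
    {1, 0, 0},   /* P2 waits for P0: a cycle, hence deadlock */
};

/* depth-first search; a back edge to a node on the stack means a cycle */
bool has_cycle_from(int u, bool visited[N], bool on_stack[N]) {
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (on_stack[v]) return true;
        if (!visited[v] && has_cycle_from(v, visited, on_stack))
            return true;
    }
    on_stack[u] = false;
    return false;
}

int main(void) {
    bool visited[N] = {false}, on_stack[N] = {false};
    for (int i = 0; i < N; i++)
        if (!visited[i] && has_cycle_from(i, visited, on_stack)) {
            printf("deadlock detected\n");
            return 0;
        }
    printf("no deadlock\n");
    return 0;
}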
Deadlock Recovery
There are several deadlock recovery techniques:
• Manual Intervention
• Automatic Recovery
• Process Termination
• Resource Preemption
1. Manual Intervention
When a deadlock is detected, one option is to inform the operator and let
them handle the situation manually.
2. Automatic Recovery
An alternative approach is to enable the system to recover from
deadlock automatically. This method involves breaking the deadlock
cycle by either aborting processes or pre-empting resources.
3. Process Termination
• Abort all deadlocked processes
• Abort one process at a time

Deadlock Ignorance
If deadlock is very rare, we let it happen and reboot the system. This is
the approach that both Windows and UNIX take; it is known as the ostrich
algorithm.


2. Critical section
A critical section is a part of a program where shared resources (such as
variables) are accessed. To avoid conflicts, only one process should enter the
critical section at a time, while others must wait. This ensures data
consistency and prevents unpredictable behavior.
do {
    acquire(lock);    // entry section: wait until the lock is free
    // critical section
    release(lock);    // exit section: let the next process in
    // remainder section
} while (true);

Working of a Critical Section


• Entry Section: Controls access to the critical section by allowing one
process at a time.
• Critical Section: The part where shared resources are used.
• Exit Section: Releases the resources and allows the next process to enter.
Critical Section Problem
A solution to the critical section problem ensures that multiple processes
can share resources without conflicts. A proper solution must satisfy these
conditions:
1. Mutual Exclusion – Only one process can be in the critical section at a
time. Others must wait.
2. Progress – If the section is free, any waiting process should be able to
enter.
3. Bounded Waiting – A process should not wait indefinitely to enter the
critical section.
Solving the Critical Section Problem
Problems like deadlock (processes getting stuck) and starvation (some
processes never getting access) can occur. To avoid this, semaphores and
mutexes are used:
• Semaphore: A variable that controls access to resources.
• Mutex: A lock that ensures only one process enters the critical section at
a time.
When a process wants to enter:
• It requests access using a semaphore or mutex.
• If the resource is free, it enters.
• If not, it waits until the resource is available.
• After finishing, it releases the resource for the next process.
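For instance, here is a minimal runnable sketch using a POSIX mutex; the
shared counter and worker function are hypothetical, but the lock/unlock
calls around the critical section follow the pattern above:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long counter = 0;  /* shared resource */

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* entry section: wait if busy */
        counter++;                    /* critical section */
        pthread_mutex_unlock(&lock);  /* exit section: release */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);
    return 0;
}

Without the mutex, the two threads would race on counter; with it, each
increment happens under mutual exclusion and the final value is always
200000.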
Advantages of Using a Critical Section
✔ Efficient CPU Usage – Reduces unnecessary waiting and improves
system performance.
✔ Ensures Mutual Exclusion – Prevents conflicts when multiple processes
access shared resources.
✔ Simplifies Synchronization – Avoids complex resource management by
allowing one process at a time.
Disadvantages:
1. Overhead: Implementing critical sections using synchronization
mechanisms like semaphores and mutexes can introduce additional
overhead, slowing down program execution.
2. Deadlocks: Poorly implemented critical sections can lead to deadlocks,
where multiple processes are waiting indefinitely for each other to release
resources.
3. Can cause contention: If multiple processes frequently access the same
critical section, contention for the critical section can occur, reducing
performance.
3. Atomic transaction
Atomic Transactions in Operating Systems
In an operating system, maintaining data consistency and reliability is
crucial. One way to achieve this is through atomic transactions, which
ensure that a group of operations either completes fully or does not happen
at all. This prevents data corruption in case of system failures.
Key Concepts of Atomic Transactions
1. Atomic Transactions – A group of operations treated as a single unit. If
one part fails, the entire transaction is rolled back to maintain data
integrity.
2. Atomicity – Ensures that a transaction is indivisible—either all
operations succeed, or none do.
3. Consistency – Maintains system integrity by keeping data in a valid
state before and after transactions. If a transaction fails, the system rolls
back to its previous state.
4. Isolation – Prevents interference between multiple transactions. Each
transaction executes as if it were the only one running.
5. Durability – Ensures that once a transaction is committed, its changes
remain permanent, even after system failures.
6. Transaction Manager – A system component that manages
transactions, ensuring they follow the ACID properties (Atomicity,
Consistency, Isolation, Durability).

How Atomic Transactions Work


1. Transaction Begins – A set of operations starts, treated as a single unit.
2. Atomicity Check – Ensures that either all operations are completed or
none are executed. If an error occurs, everything is rolled back.
3. Consistency Management – Ensures the system stays in a valid state
before and after the transaction.
4. Isolation Handling – Prevents overlapping transactions from affecting
each other until they are completed.
5. Durability Implementation – Uses logs to store transaction progress,
ensuring recovery in case of a crash.
6. Commit or Rollback – If all steps succeed, the transaction is
committed (saved permanently). If any step fails, it is rolled back
(reverted).

1. Money Transfer Example


Scenario:
● Picture a banking app where money moves from one account to another.
Atomic Transaction:
● A user initiates a transfer involving two operations: deducting an
amount from the sender and adding the same amount to the recipient.
● The principle of atomicity guarantees that either both operations
complete successfully, updating both accounts, or neither takes effect
if there is any error.
Consistency:
● If the deduction succeeds but the subsequent addition fails, the
deduction is undone so as to preserve the preceding consistent state of
the system.
Isolation:
● Concurrent transactions are isolated and do not interfere. One
transaction does not affect another until it is committed.
Durability:
● These operations are recorded in logs so that if a failure
occurs after the deduction but before the addition, the system can
restore itself and remain consistent.
Example: Bank Money Transfer
Imagine transferring ₹500 from Account A to Account B:
• Step 1: Deduct ₹500 from Account A.
• Step 2: Add ₹500 to Account B.
• Step 3: If both succeed, commit the transaction.
• Step 4: If Step 2 fails, Step 1 is rolled back (money is returned to
Account A).
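A minimal sketch in C of this commit-or-rollback pattern, assuming
hypothetical Account, debit, and credit helpers (a real system would also
write each step to a durable log before applying it):

#include <stdbool.h>

typedef struct { long balance; } Account;

bool debit(Account *a, long amt) {
    if (a->balance < amt) return false;  /* step fails: insufficient funds */
    a->balance -= amt;
    return true;
}

bool credit(Account *a, long amt) {
    a->balance += amt;   /* could also fail in a real system */
    return true;
}

/* Either both operations take effect (commit) or neither does (rollback). */
bool transfer(Account *from, Account *to, long amt) {
    if (!debit(from, amt))
        return false;                 /* nothing happened, nothing to undo */
    if (!credit(to, amt)) {
        credit(from, amt);            /* rollback: restore the debit */
        return false;
    }
    return true;                      /* commit point */
}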
How ACID Properties Apply
✔ Atomicity – Both deduction and addition must happen together or not at
all.
✔ Consistency – The total amount in the system remains unchanged.
✔ Isolation – Other transactions cannot interfere while this is happening.
✔ Durability – Changes are saved even if the system crashes.
Atomic transactions are essential in operating systems, databases, and
financial systems to ensure reliability, prevent data loss, and avoid system
crashes.

4. Paging
Paging is a storage mechanism that allows the OS to retrieve processes from
secondary storage into main memory in the form of pages. In the paging
method, main memory is divided into small fixed-size blocks of physical
memory called frames. The size of a frame is kept the same
as that of a page to achieve maximum utilization of main memory and to
avoid external fragmentation. Paging is used for faster access to data, and it
is a logical concept.
Example of Paging in OS
For example, suppose the main memory size is 16 KB and the frame size is
1 KB. The main memory will be divided into a collection of 16 frames of
1 KB each.
There are 4 separate processes in the system: A1, A2, A3, and A4, of 4 KB
each. Each process is divided into pages of 1 KB each so that the
operating system can store one page in one frame.
At the beginning, all the frames are empty, so the pages of the processes
are stored in a contiguous way.

In this example, you can see that A2 and A4 are moved to the waiting state
after some time. Therefore, eight frames become empty, so other pages
can be loaded into those empty blocks. The process A5, of size 8 pages
(8 KB), is waiting in the ready queue.

In this example, you can see that there are eight non-contiguous frames
available in memory, and paging offers the flexibility of storing the
process at different places. This allows us to load the pages of process
A5 in place of A2 and A4.
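As a sketch of the underlying address translation (assuming 1 KB pages as
in the example above and a hypothetical page table mapping pages to
frames):

#include <stdio.h>

#define PAGE_SIZE 1024   /* 1 KB pages, matching the example */

int main(void) {
    /* page_table[p] = frame holding page p (hypothetical mapping) */
    int page_table[4] = {5, 6, 7, 8};

    unsigned logical = 2600;                 /* some logical address */
    unsigned page    = logical / PAGE_SIZE;  /* page number = 2 */
    unsigned offset  = logical % PAGE_SIZE;  /* offset = 552 */
    unsigned physical = page_table[page] * PAGE_SIZE + offset;

    printf("logical %u -> page %u, offset %u -> physical %u\n",
           logical, page, offset, physical); /* 7*1024 + 552 = 7720 */
    return 0;
}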
What is Paging Protection?
Paging is protected by inserting an additional bit, called the valid/invalid
bit, into each page table entry. Memory protection in paging is achieved by
associating protection bits with each page. These bits are stored in the page
table entries and specify the protection on the corresponding page.
Advantages of Paging
Here are the advantages of the paging method:
• Easy-to-use memory management algorithm
• No external fragmentation
• Swapping is easy between equal-sized pages and page frames.
Disadvantages of Paging
Here are the drawbacks of paging:
• May cause internal fragmentation
• Page tables consume additional memory.
• Multi-level paging may lead to memory reference overhead.
5. Hardware Synchronisation
When multiple processes share the same data or variables while executing
concurrently, synchronization problems can occur. One such issue is the
race condition, where the final value of a shared variable depends on the
unpredictable order in which processes read and update it, leading to
inconsistent results.
To prevent race conditions, hardware synchronization mechanisms are
implemented. These mechanisms provide efficient solutions to process
synchronization problems by ensuring mutual exclusion, progress, and
bounded waiting.
Hardware-Based Synchronization Algorithms
There are three primary hardware solutions for process synchronization:
1. Test and Set
2. Swap
3. Unlock and Lock
These algorithms leverage atomic hardware instructions to manage critical
section access efficiently.

1. Test and Set Algorithm


Concept:
• Uses a shared variable lock initialized to false.
• The TestAndSet(lock) function returns the current value of lock and sets
it to true.
• The first process can enter the critical section since TestAndSet(lock)
returns false initially.
• Other processes wait in a loop (while(lock)) until lock is set to false
again.
Key Properties:
✔ Mutual Exclusion – Only one process can access the critical section at a
time.
✔ Progress – Once a process exits, another process can enter.
✘ Bounded Waiting – No specific order is followed; any process can enter
next.
Pseudocode:
// Shared variable lock initialized to false
boolean lock = false;

boolean TestAndSet(boolean &target) {
    boolean rv = target;   // read the old value
    target = true;         // set the lock
    return rv;             // false means the caller acquired the lock
}

while (true) {
    while (TestAndSet(lock))
        ;                  // busy waiting
    // critical section
    lock = false;          // release the lock
    // remainder section
}
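For reference, a runnable C11 counterpart of this pseudocode can be built
on atomic_flag, whose test-and-set operation is guaranteed to be atomic:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;   /* starts cleared ("false") */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))
        ;                              /* busy wait: lock was already set */
}

void release(void) {
    atomic_flag_clear(&lock);          /* set lock back to "false" */
}

A process brackets its critical section with acquire() and release();
which waiting process succeeds next is unspecified, matching the missing
bounded-waiting property noted above.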

2. Swap Algorithm
Concept:
• Uses a shared lock and an individual key for each process.
• The swap(lock, key) function swaps the values of lock and key.
• A process enters the critical section when key becomes false after
swapping.
• Other processes must wait until the lock is released.
Key Properties:
✔ Mutual Exclusion – Ensured by swapping lock values.
✔ Progress – Once a process exits, another can enter.
✘ Bounded Waiting – No guarantee that processes will enter in order.
Pseudocode:
// Shared variable lock initialized to false
boolean lock = false;
// Each process has its own key
boolean key;

void swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

while (true) {
    key = true;
    while (key)
        swap(lock, key);   // busy waiting until lock was false
    // critical section
    lock = false;          // release the lock
    // remainder section
}

3. Unlock and Lock Algorithm


Concept:
• Uses TestAndSet but introduces an additional array waiting[i] to track
waiting processes.
• A circular queue ensures processes access the critical section in order.
• Instead of immediately setting lock = false, the algorithm checks if other
processes are waiting.
• This guarantees bounded waiting, unlike the previous two algorithms.
Key Properties:
✔ Mutual Exclusion – Ensured via TestAndSet().
✔ Progress – A waiting process is allowed to enter once another leaves.
✔ Bounded Waiting – Processes enter in a specific order.
Pseudocode:
// Shared variables
boolean lock = false;
boolean waiting[n];         // one entry per process, initialized to false
// Each process i has its own key
boolean key;

while (true) {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(lock);   // busy waiting
    waiting[i] = false;

    // critical section

    // scan for the next waiting process in circular order
    j = (i + 1) % n;
    while (j != i && !waiting[j])
        j = (j + 1) % n;

    if (j == i)
        lock = false;       // no one is waiting: release the lock
    else
        waiting[j] = false; // hand the lock directly to process j
    // remainder section
}

Comparison of Hardware Synchronization Algorithms


Algorithm         Mutual Exclusion   Progress   Bounded Waiting   Complexity
Test and Set      Yes                Yes        No                Low
Swap              Yes                Yes        No                Medium
Unlock and Lock   Yes                Yes        Yes               High

Conclusion
• Hardware synchronization mechanisms are effective in managing
concurrent access to shared resources.
• Test and Set and Swap ensure mutual exclusion but lack bounded
waiting.
• Unlock and Lock resolves the issue of bounded waiting by maintaining a
queue-based order.
• Choosing the right algorithm depends on system requirements and
efficiency needs.
By implementing hardware synchronization techniques, operating systems
can prevent race conditions, enhance performance, and ensure process
safety.

6. Segmentation
A process is divided into segments: chunks of the program that are not
necessarily all the same size.
Segmentation gives the user's view of the process, which paging does not
provide. Here the user's view is mapped onto physical memory.
Types of Segmentation in Operating Systems
• Virtual Memory Segmentation: Each process is divided into a number
of segments, but the segmentation is not done all at once. This
segmentation may or may not take place at the run time of the program.
• Simple Segmentation: Each process is divided into a number of
segments, all of which are loaded into memory at run time, though not
necessarily contiguously.
There is no simple relationship between logical addresses and physical
addresses in segmentation. A table stores the information about all such
segments and is called Segment Table.
What is Segment Table?
It maps a two-dimensional logical address (segment number, offset) into a
one-dimensional physical address. Each of its entries has:
• Base Address: the starting physical address where the
segment resides in memory.
• Segment Limit: the length of the segment; an offset beyond the
limit is illegal.
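A sketch in C of the resulting translation, assuming a hypothetical
segment table with base/limit pairs:

#include <stdio.h>

typedef struct {
    unsigned base;   /* starting physical address of the segment */
    unsigned limit;  /* length of the segment */
} SegEntry;

int main(void) {
    SegEntry seg_table[3] = {
        {1400, 1000},  /* segment 0 */
        {6300,  400},  /* segment 1 */
        {4300, 1100},  /* segment 2 */
    };

    unsigned segment = 2, offset = 53;        /* logical address (2, 53) */
    if (offset >= seg_table[segment].limit) {
        printf("trap: offset out of bounds\n"); /* illegal access */
        return 1;
    }
    unsigned physical = seg_table[segment].base + offset;
    printf("(%u, %u) -> physical %u\n", segment, offset, physical); /* 4353 */
    return 0;
}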

Advantages of Segmentation in Operating System


• Reduced Internal Fragmentation : Segmentation can reduce internal
fragmentation compared to fixed-size paging, as segments can be sized
according to the actual needs of a process.
• Flexibility: Segmentation provides a higher degree of flexibility than
paging. Segments can be of variable size, and processes can be designed
to have multiple segments, allowing for more fine-grained memory
allocation.
• Sharing: Segmentation allows for sharing of memory segments between
processes. This can be useful for inter-process communication or for
sharing code libraries.
• Protection: Segmentation provides a level of protection between
segments, preventing one process from accessing or modifying another
process's memory segment. This can help increase the security and
stability of the system.
Disadvantages of Segmentation in Operating System
• External Fragmentation: As processes are loaded and removed from
memory, the free memory space is broken into little pieces, causing
external fragmentation. This can lead to wasted memory and decreased
performance.
• Overhead: Using a segment table can increase overhead and reduce
performance. Each segment table entry requires additional memory, and
accessing the table to retrieve memory locations can increase the time
needed for memory operations.
• Complexity: Segmentation can be more complex to implement and
manage than paging. In particular, managing multiple segments per
process can be challenging, and the potential for segmentation faults can
increase as a result.

7. Page replacement
8. Demand paging
Demand paging is a technique used in virtual memory systems where pages
enter main memory only when requested or needed by the CPU. In demand
paging, the operating system loads only the necessary pages of a program
into memory at runtime, instead of loading the entire program into memory
at the start. A page fault occurs when the program needs to access a page
that is not currently in memory.
The operating system then loads the required pages from the disk into
memory and updates the page tables accordingly. This process is
transparent to the running program and it continues to run as if the page had
always been in memory.

Working Process of Demand Paging


Let us understand this with the help of an example. Suppose we want to run
a process P which has four pages P0, P1, P2, and P3. Currently, the
page table holds pages P1 and P3.

The operating system's demand paging mechanism follows a few steps in its
operation.
• Program Execution: Upon launching a program, the operating system
allocates a certain amount of memory to the program and establishes a
process for it.
• Creating Page Tables: To keep track of which program pages are
currently in memory and which are on disk, the operating system
makes page tables for each process.
• Handling Page Fault: When a program tries to access a page that isn’t in
memory at the moment, a page fault happens. In order to determine
whether the necessary page is on disk, the operating system pauses the
application and consults the page tables.
• Page Fetch: If the necessary page is on the disk, the operating system
loads it into memory. The page's new location in memory is then
reflected in the page table.
• Resuming The Program: The operating system picks up where it left off
when the necessary pages are loaded into memory.
• Page Replacement: If there is not enough free memory to hold all the
pages a program needs, the operating system may need to replace one or
more pages currently in memory with pages from the disk. The page
replacement algorithm used by the operating system determines which
pages are selected for replacement.
• Page Cleanup: When a process terminates, the operating system frees
the memory allocated to the process and cleans up the corresponding
entries in the page tables.
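As a toy illustration of these steps (assuming a hypothetical page table
where -1 marks a page that is not in memory, and ignoring page
replacement), pages are fetched only on first access:

#include <stdio.h>

#define NPAGES 4
#define NO_FRAME -1

/* page p lives in frame page_table[p]; NO_FRAME means "not in memory".
 * P1 and P3 start in memory, matching the example above. */
int page_table[NPAGES] = {NO_FRAME, 0, NO_FRAME, 1};
int next_free_frame = 2;

int access_page(int page) {
    if (page_table[page] == NO_FRAME) {        /* page fault */
        printf("page fault on P%d: fetching from disk\n", page);
        page_table[page] = next_free_frame++;  /* load page, update table */
    }
    return page_table[page];                   /* program resumes as normal */
}

int main(void) {
    int refs[] = {1, 3, 0, 2};  /* reference string */
    for (int i = 0; i < 4; i++)
        printf("P%d -> frame %d\n", refs[i], access_page(refs[i]));
    return 0;
}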
Advantages of Demand Paging
The demand paging technique provides several benefits that improve the
efficiency of the operating system.
• Efficient use of physical memory: Demand paging allows for more
efficient use of memory because only the necessary pages are loaded into
memory at any given time.
• Support for larger programs: Programs can be larger than the physical
memory available on the system because only the necessary pages will be
loaded into memory.
• Faster program start: Because only part of a program is initially loaded
into memory, programs can start faster than if the entire program were
loaded at once.
• Reduced memory usage: Demand paging can help reduce the amount of
memory a program needs, which can improve system performance by
reducing the amount of disk I/O required.
Disadvantages of Demand Paging
• Page Fault Overhead: The process of swapping pages between memory
and disk can cause a performance overhead, especially if the program
frequently accesses pages that are not currently in memory.
• Degraded Performance: If a program frequently accesses pages that are
not currently in memory, the system spends a lot of time swapping out
pages, which degrades performance.
• Fragmentation: Demand paging can cause physical
memory fragmentation, degrading system performance over time.
• Complexity: Implementing demand paging in an operating system can be
complex, requiring sophisticated algorithms and data structures to manage
page tables and swap space.

9. Storage device management
