OS notes
1. Deadlock
• Deadlock is a situation in computing where two or more processes are
unable to proceed because each is waiting for another to release
resources.
• The four necessary conditions for deadlock are mutual exclusion,
hold and wait, no preemption, and circular wait.
Deadlock Avoidance
Avoidance looks into the future: before a process begins executing, the
operating system must know in advance all the resources the process may
request. Using this information, the system grants a request only if doing
so leaves it in a safe state. The Banker's algorithm is the classic
deadlock-avoidance algorithm.
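The heart of the Banker's algorithm is a safety check: given what each process may still request, is there some order in which every process can finish? A minimal sketch in C, with illustrative matrices (the process/resource counts and figures are made up for the example):

```c
#include <stdbool.h>
#include <string.h>

#define P 3  /* number of processes (illustrative) */
#define R 2  /* number of resource types (illustrative) */

/* Returns true if some ordering lets every process run to completion
 * with the currently available resources (a "safe state"). */
bool is_safe(int available[R], int max[P][R], int alloc[P][R]) {
    int work[R];
    bool finished[P] = { false };
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (max[i][j] - alloc[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                 /* pretend process i finishes */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j]; /* it releases what it held */
                finished[i] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed)
            return false;  /* no process can proceed: unsafe state */
    }
    return true;
}
```

The OS would run this check on every allocation request, granting it only if the resulting state is still safe.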
Deadlock Ignorance
If deadlocks are very rare, simply let them happen and reboot the system
when they do. This is the approach both Windows and UNIX take, and it is
known as the ostrich algorithm.
2. Critical section
A critical section is a part of a program where shared resources (such as
variables) are accessed. To avoid conflicts, only one process should enter the
critical section at a time, while others must wait. This ensures data
consistency and prevents unpredictable behavior.
do {
while (flag); // entry section: wait until the flag is clear
flag = 1; // claim the critical section
// critical section
flag = 0; // exit section: release the flag
// remainder section
} while (true);
Note that this simple flag scheme is only illustrative: the test and the
set are separate steps, so two processes can both see flag as clear and
enter together. A correct solution needs atomic hardware support, as
discussed in the hardware synchronisation section.
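In practice a critical section is protected with a real mutual-exclusion primitive. A minimal sketch using a POSIX pthread mutex (the thread count and iteration count are arbitrary choices for the example): without the lock, the two threads' increments could interleave and lose updates; with it, the final count is always exact.

```c
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* entry section */
        counter++;                   /* critical section */
        pthread_mutex_unlock(&lock); /* exit section */
    }
    return NULL;                     /* remainder section */
}

/* Run two workers concurrently; with the mutex held around the
 * increment, the result is always 2 * 100000. */
long run_two_workers(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```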
4. Paging
Paging is a storage mechanism that allows the OS to retrieve processes
from secondary storage into main memory in the form of pages. In the
paging method, main memory is divided into small fixed-size blocks of
physical memory called frames. The frame size is kept the same as the
page size to maximise utilisation of main memory and to avoid external
fragmentation. Paging provides faster access to data, and it is a
logical concept.
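The translation from a logical address to a physical one follows directly from the fixed page size: the page number indexes the page table, and the offset is carried over unchanged. A minimal sketch with 1 KB pages (the page-table contents in the test are made up for illustration):

```c
#define PAGE_SIZE 1024  /* 1 KB pages, matching the example below */

/* Translate a logical address to a physical address.
 * page_table[p] gives the frame currently holding page p. */
unsigned translate(unsigned logical, const unsigned *page_table) {
    unsigned page   = logical / PAGE_SIZE;  /* page number */
    unsigned offset = logical % PAGE_SIZE;  /* offset within the page */
    unsigned frame  = page_table[page];     /* frame holding that page */
    return frame * PAGE_SIZE + offset;      /* physical address */
}
```

For instance, with a page table mapping page 1 to frame 2, logical address 1034 (page 1, offset 10) translates to physical address 2*1024 + 10 = 2058.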
Example of Paging in OS
For example, if the main memory size is 16 KB and the frame size is 1 KB,
main memory is divided into 16 frames of 1 KB each.
Suppose there are 4 separate processes in the system, A1, A2, A3, and A4,
of 4 KB each. Each process is divided into pages of 1 KB so that the
operating system can store one page in one frame.
At the beginning, all the frames are empty, so the pages of the processes
are stored contiguously.
Now suppose A2 and A4 move to the waiting state after some time. Their
eight frames become empty, so other pages can be loaded into those blocks,
while a process A5 of 8 pages (8 KB) waits in the ready queue.
Although the eight available frames are non-contiguous, paging offers the
flexibility of storing a process at different places, so the pages of A5
can be loaded in place of A2 and A4.
What is Paging Protection?
Memory protection in paging is achieved by associating protection bits
with each page-table entry; these bits specify the allowed access to the
corresponding page. An additional valid/invalid bit marks whether the
page belongs to the process's address space.
Advantages of Paging
Advantages of the paging method:
• Simple memory-management algorithm
• No external fragmentation
• Swapping is easy between equal-sized pages and page frames.
Disadvantages of Paging
Drawbacks of paging:
• May cause internal fragmentation
• Page tables consume additional memory.
• Multi-level paging may lead to memory-reference overhead.
5. Hardware Synchronisation
When multiple processes share the same data or variables while executing
concurrently, synchronisation problems can occur. One such issue is the
race condition, where the outcome depends on the order in which processes
read and update a shared variable, leading to inconsistent results.
To prevent race conditions, hardware synchronization mechanisms are
implemented. These mechanisms provide efficient solutions to process
synchronization problems by ensuring mutual exclusion, progress, and
bounded waiting.
Hardware-Based Synchronization Algorithms
There are three primary hardware solutions for process synchronization:
1. Test and Set
2. Swap
3. Unlock and Lock
These algorithms leverage atomic hardware instructions to manage critical
section access efficiently.
1. Test and Set Algorithm
Concept:
• TestAndSet(lock) atomically sets lock to true and returns its previous
value.
• A process enters the critical section when TestAndSet returns false.
Pseudocode:
// Shared variable lock initialized to false
boolean lock;
while(1) {
while (TestAndSet(lock)); // Busy waiting
// Critical section
lock = false; // Release the lock
// Remainder section
}
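A sketch of this busy-wait lock in standard C11: atomic_flag_test_and_set performs exactly the TestAndSet operation, setting the flag and returning its previous value in one atomic step.

```c
#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;  /* initially clear */
static int shared = 0;

/* Spin until test-and-set returns false (lock was free). */
void tas_lock(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;  /* busy waiting */
}

void tas_unlock(void) {
    atomic_flag_clear(&lock_flag);  /* release the lock */
}

/* Example use: a critical section protected by the lock. */
int increment_once(void) {
    tas_lock();
    shared++;        /* critical section */
    tas_unlock();
    return shared;   /* remainder section follows */
}
```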
2. Swap Algorithm
Concept:
• Uses a shared lock and an individual key for each process.
• The swap(lock, key) function swaps the values of lock and key.
• A process enters the critical section when key becomes false after
swapping.
• Other processes must wait until the lock is released.
Key Properties:
✔ Mutual Exclusion – Ensured by swapping lock values.
✔ Progress – Once a process exits, another can enter.
✘ Bounded Waiting – No guarantee that processes will enter in order.
Pseudocode:
// Shared variable lock initialized to false
boolean lock;
boolean key;
while(1) {
key = true;
while (key)
swap(lock, key); // Busy waiting
// Critical section
lock = false; // Release the lock
// Remainder section
}
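In C11 the swap-based lock can be sketched with atomic_exchange, which stores a new value into the lock and returns the old one in a single atomic step; this is the swap(lock, key) of the pseudocode with the key folded into the return value.

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool swap_lock = false;  /* shared lock, initially free */
static int shared2 = 0;

/* Spin until the exchanged-out value is false (lock was free). */
void sw_lock(void) {
    while (atomic_exchange(&swap_lock, true))
        ;  /* busy waiting: "key" stays true while the lock is held */
}

void sw_unlock(void) {
    atomic_store(&swap_lock, false);  /* release the lock */
}

/* Example use: a critical section protected by the swap lock. */
int swap_increment(void) {
    sw_lock();
    shared2++;       /* critical section */
    sw_unlock();
    return shared2;  /* remainder section follows */
}
```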
3. Unlock and Lock Algorithm
Concept:
• Extends Test and Set with a waiting[] array so that waiting processes
enter the critical section in turn, guaranteeing bounded waiting.
• On exit, the lock is handed directly to the next waiting process
instead of being released.
Pseudocode:
while(1) {
waiting[i] = true;
key = true;
while (waiting[i] && key)
key = TestAndSet(lock);
waiting[i] = false;
// Critical section
j = (i + 1) % n;
while (j != i && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = false; // No one is waiting: release the lock
else
waiting[j] = false; // Hand the critical section to process j
// Remainder section
}
Conclusion
• Hardware synchronization mechanisms are effective in managing
concurrent access to shared resources.
• Test and Set and Swap ensure mutual exclusion but lack bounded
waiting.
• Unlock and Lock resolves the issue of bounded waiting by maintaining a
queue-based order.
• Choosing the right algorithm depends on system requirements and
efficiency needs.
By implementing hardware synchronization techniques, operating systems
can prevent race conditions, enhance performance, and ensure process
safety.
6. Segmentation
A process is divided into segments: chunks of a program that are not
necessarily all the same size. Segmentation gives the user's view of the
process, which paging does not provide; this user view is mapped onto
physical memory.
Types of Segmentation in Operating Systems
• Virtual Memory Segmentation: Each process is divided into a number
of segments, but the segmentation is not done all at once. This
segmentation may or may not take place at the run time of the program.
• Simple Segmentation: Each process is divided into a number of
segments, all of which are loaded into memory at run time, though not
necessarily contiguously.
There is no simple relationship between logical and physical addresses
in segmentation. The information about all segments is stored in a table
called the segment table.
What is a Segment Table?
It maps a two-dimensional logical address (segment number, offset) into a
one-dimensional physical address. Each table entry has:
• Base Address: the starting physical address where the segment resides
in memory.
• Segment Limit: the length of the segment; any offset beyond it is an
addressing error.
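The translation can be sketched directly from those two fields: check the offset against the limit, then add the base. The segment table contents in the test are illustrative, not from the notes.

```c
typedef struct {
    unsigned base;   /* starting physical address of the segment */
    unsigned limit;  /* length of the segment */
} segment_entry;

/* Translate (segment, offset) to a physical address.
 * Returns -1 on a limit violation, where the OS would raise a trap. */
long seg_translate(unsigned seg, unsigned offset,
                   const segment_entry *table, unsigned nsegs) {
    if (seg >= nsegs || offset >= table[seg].limit)
        return -1;                        /* addressing error: trap */
    return (long)table[seg].base + offset;
}
```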
7. Page replacement
8. Demand paging
Demand paging is a technique used in virtual memory systems where pages
enter main memory only when requested or needed by the CPU. In demand
paging, the operating system loads only the necessary pages of a program
into memory at runtime, instead of loading the entire program into memory
at the start. A page fault occurs when the program needs to access a page
that is not currently in memory.
The operating system then loads the required page from the disk into
memory and updates the page tables accordingly. This process is
transparent to the running program, which continues to run as if the page
had always been in memory.
The operating system‘s demand paging mechanism follows a few steps in its
operation.
• Program Execution: Upon launching a program, the operating system
allocates a certain amount of memory to the program and establishes a
process for it.
• Creating Page Tables: To keep track of which program pages are
currently in memory and which are on disk, the operating system
makes page tables for each process.
• Handling Page Fault: When a program tries to access a page that isn’t in
memory at the moment, a page fault happens. In order to determine
whether the necessary page is on disk, the operating system pauses the
application and consults the page tables.
• Page Fetch: If the required page is on disk, the operating system
retrieves it and loads it into memory. The page table is then updated
with the page's new location in memory.
• Resuming The Program: The operating system picks up where it left off
when the necessary pages are loaded into memory.
• Page Replacement: If there is not enough free memory to hold all the
pages a program needs, the operating system may need to replace one or
more pages currently in memory with pages from the disk. The page
replacement algorithm used by the operating system determines which
pages are selected for replacement.
• Page Cleanup: When a process terminates, the operating system frees
the memory allocated to the process and cleans up the corresponding
entries in the page tables.
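The fault-handling steps above can be sketched as a toy simulation: pages are brought in only on first access, and each first access counts as a page fault. The page count is an arbitrary choice, and the sketch assumes enough frames that no replacement is needed.

```c
#include <stdbool.h>

#define NPAGES 16  /* size of the process's page table (illustrative) */

/* Count page faults for a sequence of page references under pure
 * demand paging with ample free frames. */
int count_page_faults(const int *refs, int n) {
    bool in_memory[NPAGES] = { false };  /* valid/invalid bits */
    int faults = 0;
    for (int i = 0; i < n; i++) {
        if (!in_memory[refs[i]]) {  /* page fault: fetch from disk */
            in_memory[refs[i]] = true;
            faults++;
        }                           /* otherwise: hit, no disk I/O */
    }
    return faults;
}
```

For the reference string 0, 1, 0, 2, 1, 3 this yields 4 faults: one per distinct page, with the repeated references to pages 0 and 1 served from memory.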
Advantages of Demand Paging
Demand paging provides several benefits that improve the efficiency of
the operating system.
• Efficient use of physical memory: Demand paging allows more efficient
use of physical memory because only the necessary pages are loaded into
memory at any given time.
• Support for larger programs: Programs can be larger than the physical
memory available on the system because only the necessary pages will be
loaded into memory.
• Faster program start: Because only part of a program is initially loaded
into memory, programs can start faster than if the entire program were
loaded at once.
• Reduced memory usage: Demand paging can help reduce the amount of
memory a program needs, which can improve system performance by
reducing the amount of disk I/O required.
Disadvantages of Demand Paging
• Page Fault Overload: The process of swapping pages between memory
and disk can cause a performance overhead, especially if the program
frequently accesses pages that are not currently in memory.
• Degraded Performance: If a program frequently accesses pages that are
not currently in memory, the system spends a lot of time swapping out
pages, which degrades performance.
• Fragmentation: Demand paging can cause physical
memory fragmentation, degrading system performance over time.
• Complexity: Implementing demand paging in an operating system can be
complex, requiring sophisticated algorithms and data structures to
manage page tables and swap space.