Memory Management and Virtual Memory

Memory management is a key function of operating systems that allows for the efficient use of limited memory resources when running multiple processes simultaneously. It involves subdividing memory space among processes, allocating and deallocating memory as needed, and preventing issues like fragmentation. Different techniques exist for memory management, like paging, segmentation, and contiguous allocation, each with their own advantages and disadvantages depending on the system context.


Memory Management

Group 5
Background: What is Memory Management

In a multiprogramming computer, the operating system resides in one part of memory, and the rest is used by multiple processes.

The task of subdividing memory among different processes is called memory management. It is the method by which the operating system manages operations between main memory and disk during process execution.

The main aim of memory management is to achieve efficient utilization of memory.
Key Functions

- Allocating and deallocating memory before and after process execution.
- Keeping track of used memory space by processes.
- Minimizing fragmentation issues.
- Ensuring proper utilization of main memory.
- Maintaining data integrity during process execution.



Why Memory Management Is Required
1. Subdividing Memory

In a multiprogramming computer, the operating system resides in part of memory, leaving the rest
for multiple processes.

2. Efficient Utilization

Memory management ensures efficient use of memory by allocating and deallocating memory
before and after process execution.

3. Fragmentation

It minimizes fragmentation issues (both internal and external) to maintain optimal memory
utilization.

4. Data Integrity

Proper memory management maintains data integrity during process execution.


Memory Management Techniques

- Various methods exist for memory management, each with its own algorithmic approach.
- Examples include paging, segmentation, and contiguous memory allocation.
- The effectiveness of each technique depends on the system's context and requirements.



Advantages

1. Efficient Utilization of Memory

- Memory management ensures that available memory is used optimally by allocating and deallocating memory as needed.
- It prevents memory wastage and allows multiple processes to share memory resources effectively.

2. Dynamic Allocation

- Memory management enables dynamic allocation of memory during program execution.
- Processes can request memory as required, leading to flexibility and efficient resource utilization.
Advantages cont’d

3. Fragmentation Handling

- By managing memory, fragmentation issues are addressed.
- Internal fragmentation occurs when allocated memory is larger than needed, but memory management can mitigate this.
- External fragmentation is minimized through techniques like compaction and paging.

4. Support for Multiprogramming

- Memory management facilitates running multiple processes concurrently.
- It ensures that each process gets its fair share of memory, allowing efficient multitasking.
Disadvantages

1. Complexity

- Implementing memory management algorithms can be intricate.
- Different methods (e.g., paging, segmentation) have varying complexities, impacting system design and debugging.

2. Overhead

- Memory management introduces overhead due to bookkeeping, data structures, and algorithms.
- This overhead affects system performance and resource utilization.


More Disadvantages

3. Fragmentation

- Despite efforts to minimize fragmentation, it can still occur.
- Internal fragmentation arises when memory blocks are not fully utilized.
- External fragmentation results from gaps between allocated memory segments.



More Disadvantages

4. Virtual Memory Trade-offs

- Virtual memory allows processes to use more memory than is physically available.
- However, it can slow down the system due to constant data transfers between RAM and the hard disk.
- There's a risk of data loss or corruption if the hard disk fails during these transfers.
Contiguous Memory Allocation

● Importance of Memory Allocation: Memory allocation is crucial for efficient utilization of computer resources, enabling processes to execute smoothly.
● Definition of Contiguous Memory Allocation: The technique where a process must occupy a single contiguous block of memory for execution.
● Significance: Ensures efficient memory usage and simplifies memory management.



Variable (Dynamic) Partitioning

● Definition: Variable (Dynamic) Partitioning is a method within Contiguous Memory Allocation where memory partitions are created and resized dynamically during runtime.
● Working Principle:
  ● Memory partitions are created as processes are loaded into memory.
  ● Partition size adjusts to match the size of incoming processes, minimizing wasted space.
● Flexibility: Allows for optimal utilization of memory resources by accommodating processes of varying sizes.
● Implementation Challenges: Requires sophisticated memory management algorithms to handle dynamic partitioning efficiently.
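The working principle above can be sketched as a simple first-fit allocator over a free list. This is an illustrative example, not from the slides: the function name `allocate`, the 100-unit memory size, and the `(start, length)` hole representation are all hypothetical choices.

```python
# Illustrative sketch of variable partitioning: carve partitions out of a
# free list of holes using first-fit. A failed allocation despite enough
# total free space is exactly the external fragmentation discussed later.

def allocate(free_list, size):
    """First-fit: take `size` units from the first hole big enough.
    free_list is a list of (start, length) holes; returns the start
    address or None, mutating free_list in place."""
    for i, (start, length) in enumerate(free_list):
        if length >= size:
            if length == size:
                free_list.pop(i)                  # hole exactly consumed
            else:
                free_list[i] = (start + size, length - size)
            return start
    return None                                   # no single hole fits

free = [(0, 100)]                                 # one big hole of 100 units
a = allocate(free, 30)                            # partition at address 0
b = allocate(free, 50)                            # partition at address 30
```

After these two allocations only a 20-unit hole at address 80 remains, so a request for 25 units fails even though the memory was sized exactly to the processes loaded so far.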





Advantages of Variable Partitioning
Elimination of Internal Fragmentation:
● Space is allocated precisely according to process needs, preventing the occurrence of
unused memory within partitions.
● Ensures that the entirety of allocated memory is utilized effectively.
Enhanced Multiprogramming Capability:
● Facilitates the concurrent execution of multiple processes by utilizing available
memory efficiently.
● Enables the system to handle a larger number of processes simultaneously.
No Limitation on Process Size:
● Unlike Fixed Partitioning, where process size is restricted by partition size, Variable
Partitioning accommodates processes of any size.
● Allows for the execution of larger processes without constraints.



Disadvantages of Variable Partitioning
Complex Implementation:
● Dynamic partitioning requires more complex memory management
mechanisms compared to fixed partitioning.
● Involves dynamic allocation and deallocation of memory during
runtime, which can be challenging to implement and manage.
Potential for External Fragmentation:
● Despite efficient space utilization within partitions, external
fragmentation may occur due to the non-contiguous arrangement of
memory blocks.
● Can lead to inefficient memory allocation and fragmentation over time.



Illustration of External Fragmentation

● Example Scenario: Consider a scenario where processes of varying sizes are loaded and executed in memory.
● Outcome: Over time, as processes complete execution and memory is deallocated, the remaining free memory may become fragmented into small, non-contiguous blocks.
● Impact: This fragmentation can hinder the allocation of larger processes, even if sufficient free memory is available, due to the lack of contiguous space.





Fixed Partitioning

● Definition: Fixed Partitioning is a memory allocation technique where the main memory is divided into fixed-size partitions before the execution of processes.
● Objective: Allocate memory to multiple processes while ensuring isolation and protection.
● Historical Context: One of the earliest memory management techniques used in computer systems.



Working Principle of Fixed Partitioning

● Partition Creation: Partitions are established before the execution of processes, either manually or during system configuration.
● Partition Size: Each partition may have a different size, but once defined, it remains fixed throughout execution.
● Allocation Method: Processes are loaded into partitions based on their size and availability of free space.



Illustration of Fixed Partitioning

● Example Scenario: Main memory is divided into partitions of varying sizes (e.g., 4MB, 8MB, 16MB).
● Internal Fragmentation: Unused space within each partition due to process size mismatch (e.g., a process requiring 1MB in a 4MB partition leaves 3MB unused).
● External Fragmentation: Unallocated space that cannot be utilized due to non-contiguous allocation, leading to wasted memory.
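The internal fragmentation in the illustration above is straightforward to quantify. The following sketch is illustrative only; the function name and the partition/process pairing are hypothetical, with sizes taken from the example scenario.

```python
# Illustrative sketch: internal fragmentation in one fixed partition is the
# partition size minus the size of the process placed in it.

def internal_fragmentation(partition_mb, process_mb):
    """Wasted space inside one fixed partition, or None if the process
    does not fit in that partition at all."""
    if process_mb > partition_mb:
        return None                    # process too large for this partition
    return partition_mb - process_mb

# The slide's example: a 1 MB process in a 4 MB partition wastes 3 MB.
waste = internal_fragmentation(4, 1)
```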



Advantages of Fixed Partitioning
Ease of Implementation:
● Straightforward algorithms make implementation simple, reducing development time and
complexity.
Low Overhead:
● Minimal runtime overhead in managing fixed-size partitions, making it suitable for resource-
constrained systems.
Predictable Allocation:
● Each process receives a predetermined amount of memory, ensuring predictable performance
and resource utilization.
Elimination of External Fragmentation:
● Fixed partitions prevent external fragmentation by allocating memory in contiguous blocks,
simplifying memory management.
Isolation and Protection:
● Processes are isolated within their respective partitions, preventing interference and enhancing
system stability and security.
Disadvantages of Fixed Partitioning
Internal Fragmentation:
● Inefficient use of memory due to whole partitions being allocated to processes,
leading to wasted space.
Limitation on Process Size:
● Processes larger than partition size cannot be accommodated, limiting the flexibility
and scalability of the system.
Limitation on Multiprogramming:
● Fixed number of partitions restricts the number of processes that can be executed
simultaneously, reducing system throughput and responsiveness.
Rigidity in Memory Management:
● Inflexible allocation of fixed-size partitions may lead to suboptimal resource
utilization and difficulty in accommodating varying workload demands.



Non-Contiguous Memory Allocation

● Non-Contiguous Memory Allocation is a memory management technique where a process is allocated memory in non-contiguous blocks.
● Unlike contiguous allocation, non-contiguous allocation allows a process to be spread across different areas of memory.



Types of Non-Contiguous Memory Allocation

● Paging: Divides memory into fixed-size blocks called pages, allowing processes to be allocated across multiple pages.
● Segmentation: Divides memory into logical segments, each with its own size and attributes, allowing processes to occupy multiple segments.



Paging

● Definition: Physical memory is divided into fixed-size blocks called frames, while processes are divided into blocks of the same size called pages.
● Allocation Method: Pages of processes are allocated to available frames in memory, allowing for non-contiguous allocation.
● Page Table: Maintains the mapping between logical addresses and physical addresses, facilitating address translation during memory access.



Segmentation
● Definition: Memory is divided into logical segments, each representing a
different aspect of a program (e.g., code, data, stack).
● Allocation Method: Segments of processes are allocated to available
segments in memory, allowing for non-contiguous allocation.
● Segment Table: Maintains the mapping between logical segments and
physical addresses, enabling address translation during memory access.



Advantages of Non-Contiguous Allocation

● Flexible Memory Management: Allows processes to be allocated memory in non-contiguous blocks, accommodating varying memory requirements.
● Reduced Fragmentation: Helps minimize internal fragmentation by allowing processes to occupy memory spaces more efficiently.
● Improved Memory Utilization: Enhances memory utilization by enabling optimal allocation of memory resources to processes.



Disadvantages of Non-Contiguous Allocation
Increased Overhead: Requires additional data structures such as page tables or
segment tables for address translation, leading to increased overhead.
Complex Memory Management: Involves complex algorithms for memory
allocation and address translation, making it more challenging to implement
and manage.
Potential for Fragmentation: While it reduces internal fragmentation, non-
contiguous allocation may lead to external fragmentation over time as
memory becomes fragmented.



PAGING

Paging is a memory management function by which a computer stores and retrieves data between a device's secondary storage and primary storage in fixed-size units.



- Paging divides each process into pages.
- The main memory is divided into frames.
- Pages of the process are brought into main memory only when they are required.
- Logical address space is divided into equal-size pages.
- Physical address space is divided into equal-size frames.
- Page Size = Frame Size
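The page/frame scheme above translates a logical address by splitting it into a page number and an offset. The sketch below is illustrative, assuming a 4 KB page size and a plain dict as the page table (real MMUs use hardware structures); names like `translate` are hypothetical.

```python
# Minimal sketch of paged address translation: split the logical address
# into (page number, offset), look up the frame, recombine.

PAGE_SIZE = 4096  # page size == frame size

def translate(page_table, logical_addr):
    """Map a logical address to a physical address via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]               # a missing key models a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}                  # page 0 -> frame 5, page 1 -> frame 2
addr = translate(page_table, 4100)         # page 1, offset 4 -> 2*4096 + 4 = 8196
```

Because page size equals frame size, the offset carries over unchanged; only the page number is replaced by a frame number.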
SEGMENTATION

- Segmentation is a memory management technique in which memory is divided into variable-size parts.
- Each part is known as a segment, which can be allocated to a process.
- The details about each segment are stored in a table called the segment table.

The segment table contains two main pieces of information about each segment:
1. Base: the base address of the segment.
2. Limit: the length of the segment.

The CPU generates a logical address which contains two parts:
1. Segment Number
2. Displacement

- The segment number indexes into the segment table. The limit of the respective segment is compared with the displacement. If the displacement is less than the limit, the address is valid; otherwise an error is raised because the address is invalid.
- For valid addresses, the base address of the segment is added to the displacement to get the physical address of the actual word in main memory.
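The base/limit check described above can be sketched directly. This is an illustrative example; the segment table as a list of `(base, limit)` pairs and the name `translate_segment` are assumptions for the sketch.

```python
# Sketch of segmented address translation: validate the displacement
# against the segment limit, then add the segment base.

def translate_segment(segment_table, seg_num, displacement):
    """Return base + displacement, or raise if the displacement is
    outside the segment (displacement must be < limit)."""
    base, limit = segment_table[seg_num]
    if displacement >= limit:
        raise ValueError("invalid address: displacement exceeds segment limit")
    return base + displacement

segments = [(1000, 400), (5000, 200)]      # (base, limit) per segment
phys = translate_segment(segments, 0, 150) # 1000 + 150 = 1150
```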
ADVANTAGES
1. No internal fragmentation.
2. Average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is smaller than the page table in paging.

DISADVANTAGES
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.
DIFFERENCE BETWEEN PAGING AND SEGMENTATION

Paging:
● Closer to the Operating System.
● Suffers from internal fragmentation.
● It is faster.
● Logical address is divided into page number and page offset.
● Page table is used to maintain the page information.

Segmentation:
● Closer to the User.
● Suffers from external fragmentation.
● It is slower.
● Logical address is divided into segment number and segment offset.
● Segment table is used to maintain the segment information.
Inverted Paging

Inverted Paging is a memory management technique used in virtual memory systems to map logical addresses to physical addresses.
It is an alternative to traditional per-process page tables, especially beneficial for systems with large address spaces.



Inverted Paging

Principle: Instead of each process having its own page table, a single global table is maintained.
Data Structure: An Inverted Page Table (IPT) is used, with one entry per frame of physical memory, recording which page occupies that frame.
Mapping: Each entry in the IPT contains the page number, process ID, and additional control information.

How It Works
Address Translation:
○ When a process accesses a virtual address, the page number is extracted.
○ The IPT is searched for the corresponding entry containing the frame number.
○ The physical address is then constructed using the frame number and offset.
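The search step above can be sketched with one entry per frame, matched on (process ID, page number). This is an illustrative model: the linear search keeps the example simple, whereas real systems hash into the table; names like `ipt_lookup` are hypothetical.

```python
# Sketch of an inverted page table: ipt[frame] holds the (pid, page)
# currently resident in that frame. Lookup searches for a match and
# returns the frame number (the index), i.e. the reverse of a normal
# page table.

def ipt_lookup(ipt, pid, page):
    """Return the frame holding (pid, page), or None on a page fault."""
    for frame, entry in enumerate(ipt):
        if entry == (pid, page):
            return frame
    return None                            # page not resident

ipt = [(1, 0), (2, 3), (1, 7)]             # frames 0, 1, 2
frame = ipt_lookup(ipt, 1, 7)              # process 1, page 7 -> frame 2
```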

Advantages of Inverted Paging

Reduced Memory Overhead:
○ Requires only one page table for the entire system, reducing memory overhead.
Efficient Use of Memory:
○ Allows for efficient memory utilization, particularly in systems with a large number of processes.



Disadvantages of Inverted Paging

Search Overhead:
○ Searching the IPT for page mappings can introduce overhead, especially in systems with a large number of frames.
Concurrency Control:
○ Concurrent access to the IPT must be properly managed to prevent data corruption and race conditions.



Example

● Consider a system with 4 processes and 8 frames in physical memory.
● Each process has multiple pages mapped to frames using the IPT.
● Accessing a virtual address involves searching the IPT for the corresponding frame number.



Page Tables: Techniques and Implementation

We explore the intricacies of page table structuring, including hierarchical paging, hashed page tables, and inverted page tables, along with real-world examples from various CPU architectures.



Hierarchical Paging

● Modern computer systems often encounter the challenge of managing large logical address spaces efficiently. Hierarchical paging addresses this challenge by dividing the page table into smaller, more manageable pieces.



Two-Level Paging

● In a two-level paging scheme, the page table itself is paged, creating a hierarchical structure.
● Consider a system with a 32-bit logical address space and a 4 KB page size. The logical address is divided into a 20-bit page number and a 12-bit page offset; the page number is further split into a 10-bit index into the outer page table and a 10-bit index into the inner page table.
● Address translation works by first accessing the outer page table, then the inner page table, before resolving the final physical address.
● This approach, also known as forward-mapped page tables, allows for efficient memory management by breaking down the page table into smaller chunks.
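The 10/10/12 split described above can be sketched with a few shifts and masks. This is an illustrative decomposition of a 32-bit address; the function name is a hypothetical choice.

```python
# Sketch of splitting a 32-bit logical address for two-level paging:
# top 10 bits index the outer table, next 10 bits the inner table,
# low 12 bits are the offset within the 4 KB page.

def split_two_level(addr):
    offset = addr & 0xFFF          # low 12 bits
    inner  = (addr >> 12) & 0x3FF  # next 10 bits
    outer  = (addr >> 22) & 0x3FF  # top 10 bits
    return outer, inner, offset

outer, inner, offset = split_two_level(0x00403004)  # -> (1, 3, 4)
```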



Example

● The VAX minicomputer from Digital Equipment Corporation (DEC) utilized a variation of two-level paging. The logical address space was divided into sections, each with its own page table, reducing memory overhead and improving efficiency.



Hashed Page Tables

● Address spaces larger than 32 bits pose a challenge for traditional page table structures. Hashed page tables offer a solution by employing hash functions to map virtual page numbers to physical page frames efficiently.



Key Features of Hashed Page Tables

● Hashed page tables consist of a hash table, where each entry contains a
linked list of elements that hash to the same location.
● Each element typically includes the virtual page number, the mapped page
frame value, and a pointer to the next element in the list.
● Address translation involves hashing the virtual page number to locate the
corresponding physical page frame, handling collisions efficiently.
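The chained structure above can be sketched with a list of buckets, each holding (virtual page number, frame) pairs that hash to the same slot. The table size, the modulo hash, and the function names are illustrative assumptions.

```python
# Sketch of a hashed page table with chaining: colliding VPNs share a
# bucket, and lookup walks the chain comparing VPNs, as described above.

TABLE_SIZE = 8

def hpt_insert(table, vpn, frame):
    table[vpn % TABLE_SIZE].append((vpn, frame))

def hpt_lookup(table, vpn):
    """Hash the VPN to a slot, then search that slot's chain."""
    for entry_vpn, frame in table[vpn % TABLE_SIZE]:
        if entry_vpn == vpn:
            return frame
    return None                    # not mapped

table = [[] for _ in range(TABLE_SIZE)]
hpt_insert(table, 3, 42)
hpt_insert(table, 11, 7)           # 11 % 8 == 3: collides with VPN 3
frame = hpt_lookup(table, 11)      # chain resolves the collision -> 7
```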



Example

● Clustered page tables, a variation of hashed page tables, store mappings for
multiple physical page frames within a single page-table entry. This
approach is particularly useful for sparse address spaces, where memory
references are noncontiguous.



Overlays

● Overlays are a technique employed in memory management to efficiently utilize memory resources by swapping portions of memory between programs or data as needed. This technique allows larger programs to run smoothly on systems with limited physical memory.



Concept of Overlays

● The fundamental idea behind overlays is to load only the necessary parts of
a program into memory at any given time, freeing up memory for other
tasks. When a specific part of the program is required for execution, it is
loaded into memory, and once it's no longer needed, it is unloaded, making
space for new parts as necessary. This process is akin to swapping memory
blocks between disk storage and RAM.



Advantages of Overlays
● Increased Memory Utilization: Overlays enable multiple programs to share the same physical
memory space, thereby maximizing memory utilization and reducing the need for additional
memory hardware.
● Reduced Load Time: By loading only the essential parts of a program into memory, overlays
reduce load time significantly, enhancing overall system performance.
● Improved Reliability: Overlays mitigate the risk of memory overflow, which can lead to system
crashes or data loss, thus enhancing system reliability.
● Reduced Memory Requirement: Since only the required portions of a program are loaded into
memory, overlays help minimize the overall memory footprint of the system.
● Reduced Time Requirement: By efficiently managing memory resources, overlays contribute to
faster program execution and responsiveness.



Disadvantages of Overlays
● Complexity: Implementing and managing overlays can be complex, especially for large programs with
intricate memory requirements, leading to increased development and maintenance efforts.
● Performance Overhead: The process of loading and unloading overlays incurs additional CPU and disk
usage, potentially impacting system performance, especially on resource-constrained systems.
● Compatibility Issues: Overlays may not be compatible with all hardware and software configurations, posing
challenges in ensuring consistent performance across different systems.
● Programmer Dependency: Overlays require programmers to specify overlap maps and understand the
memory requirements of their programs, adding complexity to the development process.
● Structural Constraints: Overlapping modules must be completely disjoint, and the programming design of
overlay structures can be intricate, limiting its applicability in certain scenarios.

Multilevel Paging in Operating System

● Introduction to Multilevel Paging
- Multilevel paging is a hierarchical paging scheme with two or more levels of page tables.
- Each level's page table entry points to the next level's page table, with the last level storing actual frame information.
- The base address of the Level 1 page table is stored in the Page Table Base Register (PTBR).

● Why Multilevel Paging is Required
- Multilevel paging is used to address the large memory requirements of page tables.
- With a 32-bit address space and a large number of page table entries, storing all page tables in physical memory leads to memory wastage.
- Multilevel paging avoids this by paging the page tables themselves, so only the tables in use need to reside in memory.



Benefits of Multilevel Paging

- Efficient memory utilization: Only the outermost page table needs to be in main memory, while other page tables are brought in as required.
- Memory savings: Storing only the outermost page table in a single frame reduces memory overhead.
- Flexibility: Multilevel paging allows varying the size and number of page tables based on system requirements.



Levels in Paging

● In multilevel paging, whatever the number of levels, all the page tables are stored in main memory. Obtaining the physical address of a page frame therefore requires more than one memory access: one access for each level. Each page table entry, except those in the last-level page table, contains the base address of the next-level page table.



Example



● Reference to actual page frame:
- Reference to PTE in Level 1 page table = PTBR value + Level 1 offset present in the virtual address.
- Reference to PTE in Level 2 page table = base address (present in Level 1 PTE) + Level 2 offset (present in VA).
- Reference to PTE in Level 3 page table = base address (present in Level 2 PTE) + Level 3 offset (present in VA).
- Actual page frame address = PTE (present in Level 3).
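The chain of references above can be sketched as a table walk. This is an illustrative model in which each table is a dict from level offset to the next table, and the last-level PTE holds the frame address; all names and values are hypothetical.

```python
# Sketch of a three-level page table walk: follow one PTE per level,
# using each PTE as the base of the next-level table; the final PTE
# is the page frame address.

def walk(ptbr, offsets):
    """ptbr models the Level 1 table; offsets are the per-level indices
    extracted from the virtual address."""
    table = ptbr
    for off in offsets[:-1]:
        table = table[off]         # PTE -> base of next-level table
    return table[offsets[-1]]      # last-level PTE -> frame address

level3 = {5: 0x9000}               # Level 3 PTE holds the frame address
level2 = {2: level3}
level1 = {1: level2}               # PTBR points at the Level 1 table
frame_addr = walk(level1, (1, 2, 5))   # three memory accesses -> 0x9000
```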

Conclusion

● Multilevel paging is a hierarchical paging scheme used to address the large memory requirements of page tables.
● By bringing page tables into memory as needed, it saves memory space and improves memory utilization.
● It provides flexibility and efficient management of page tables in operating systems.



Page Table Entries

● Page table entries (PTEs) are data structures used in virtual memory systems
to map virtual addresses to physical addresses. Each PTE contains
information about a specific page or frame in memory, enabling the
operating system to manage memory efficiently.





Contents of PTE: Page Frame Number
Page Frame Number (PFN): The PFN is a crucial component of a PTE. It holds
the physical address of the page frame where the corresponding virtual page
resides. In systems that use paging, memory is divided into fixed-size blocks
called pages. When a virtual address needs to be translated into a physical
address, the PFN is used to locate the physical page frame in memory. The
size of the PFN field depends on the physical memory size supported by the
system. For example, in a system with 32-bit physical addresses, the PFN
might be 20 bits long, allowing for addressing up to 2^20 (1,048,576) page
frames.
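Extracting the PFN from a PTE is a matter of masking off the flag bits. The layout below is a hypothetical example matching the text's numbers (a 20-bit PFN above a 12-bit flag field, with a valid bit in bit 0); actual layouts differ per architecture.

```python
# Sketch of unpacking a hypothetical 32-bit PTE: 20-bit PFN in the high
# bits, flag bits in the low 12 bits. Only if the valid bit is set does
# the PFN field mean anything.

VALID_BIT = 1 << 0

def pte_frame(pte):
    """Return the PFN if the entry is valid, else None."""
    if not (pte & VALID_BIT):
        return None                # invalid entry: PFN is meaningless
    return pte >> 12               # top 20 bits hold the frame number

pte = (0x5A << 12) | VALID_BIT     # PFN 0x5A, valid bit set
pfn = pte_frame(pte)               # -> 0x5A
```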



Contents of PTE: Page Attributes
PTEs often include flags or attributes that specify various properties of the associated page. These attributes may
include:
● Valid/Invalid: Indicates whether the page is currently mapped to a physical frame in memory. If the page is
valid, the PFN field is meaningful; otherwise, the page is considered invalid and cannot be accessed.
● Protection Level: Specifies the access permissions for the page, such as read-only, read-write, execute-only,
or no access. These permissions help enforce memory protection and prevent unauthorized access to
memory regions.
● Cacheability: Determines whether the page can be cached in processor caches for faster access. Cacheable
pages may improve performance, but certain types of data (e.g., I/O buffers) may need to be marked as
non-cacheable to ensure data consistency.
● Page Type: Indicates the type of page, such as a regular data page, a code page, a kernel page, or a special-
purpose page (e.g., for I/O operations or memory-mapped devices).
● Page Table Entry Type: Specifies the type of page table entry (e.g., page table, page directory, or page directory pointer) to facilitate efficient traversal of the page table hierarchy during address translation.



Access Control Bits
These bits govern the access permissions for the associated page. Common
access control bits include:
● Read Bit (R): Allows reading data from the page.
● Write Bit (W): Permits writing data to the page.
● Execute Bit (X): Indicates whether code execution is allowed from the
page.
● User/Supervisor Bit: Determines whether the page can be accessed by
user-mode processes or only by privileged (kernel-mode) code.
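A permission check over these bits can be sketched with bit masks. The bit positions below are hypothetical (real architectures define their own encodings), as are the function and constant names.

```python
# Sketch of access control using R/W/X/User bits: an access succeeds only
# if every requested permission bit is set, and user-mode accesses also
# require the User bit.

R, W, X, USER = 1, 2, 4, 8

def check_access(pte_flags, want, user_mode):
    """True iff all bits in `want` are granted by `pte_flags`."""
    if user_mode and not (pte_flags & USER):
        return False               # supervisor-only page
    return (pte_flags & want) == want

flags = R | X | USER               # readable, executable, user-accessible
ok = check_access(flags, R | X, user_mode=True)    # -> True
bad = check_access(flags, W, user_mode=True)       # write denied -> False
```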



Dirty Bit and Other Metadata

Dirty Bit: Also known as the "modified" flag, this bit signals whether the contents of the page have been modified (written to) since it was last loaded into memory. Operating systems use the dirty bit to track pages that must be written back to disk before eviction from memory, ensuring data integrity.

Reference Bit: The reference bit, often called the "used" or "accessed" flag, indicates whether the page has been accessed (read or written) since the bit was last cleared. Hardware or software mechanisms periodically reset the reference bit, allowing the operating system to identify frequently accessed pages for optimization purposes, such as prefetching or page migration.

Additional Metadata: Depending on the memory management requirements of the system and the design decisions made by the operating system developers, PTEs may contain additional metadata fields, such as timestamps (to track page access times), ownership information (to identify the process that owns the page), or memory management data (such as page residency status or page aging information).
</ By combining these components, page table entries provide the necessary information for
virtual memory management, including address translation, memory protection, access control,
and page replacement. The precise structure and contents of PTEs vary across different hardware
architectures and operating systems, reflecting the diverse requirements and optimization
strategies employed in modern memory management systems.
</ Logical vs Physical Address Space

The concepts of logical and physical address spaces are fundamental to
understanding how memory management works in computer systems:
</ Logical Address Space

• Think of this as the virtual view of memory that a process sees.
• Each process believes it has its own chunk of memory starting from address
0 and extending up to its maximum allocated memory size.
• Logical addresses are generated by the CPU during program execution.
• These addresses are managed by the operating system and translated into
physical addresses.
</ Characteristics

Contiguity: Logical addresses are typically contiguous and uniformly spaced,
meaning that each address is sequentially numbered starting from 0 up to the
maximum addressable size.
Virtual Nature: Logical addresses are virtual addresses, as they are generated
by the CPU and used by the program, but they do not directly correspond to
physical memory locations.
</ Physical Address Space

• This is the actual hardware memory available in the system.
• It's the physical location where data is stored in RAM (Random Access
Memory).
• The physical address space corresponds to the tangible memory chips on the
computer's motherboard or attached to it.
• Each physical address directly corresponds to a location in the RAM.
</ Difference

• Logical address space is what a program "thinks" it has, while physical
address space is what actually exists in the hardware.
• Logical addresses are translated into physical addresses by the memory
management unit (MMU) of the CPU, under the control of the operating
system.
• The operating system's job is to map logical addresses to physical addresses
efficiently, managing processes' access to physical memory and ensuring
they don't interfere with each other.
</ Usage of Physical Address Space

The operating system and hardware use physical addresses to access and
manipulate data stored in memory modules. From the perspective of the
hardware, the physical address space represents the entire range of memory
available in the system.
</ Characteristics

Actual Locations: Physical addresses represent the real locations of memory
cells in the system's RAM or other storage devices, such as disk drives or
SSDs.
Non-Contiguity: Physical addresses may not be contiguous, especially in
systems that use memory management techniques like paging or
segmentation.
</ Comparison

It's not accurate to say that one address space is inherently "better" than the
other, because they serve different purposes and exist for different reasons.
</ Translation Techniques

Paging: In paging, the logical address space and physical address space are
divided into fixed-size blocks called pages and page frames, respectively.
The MMU uses page tables to map logical pages to physical page frames.
Segmentation: Segmentation divides the logical address space and physical
address space into variable-sized segments. Each segment is mapped to a
segment descriptor that specifies its base address and size.
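The paging case can be sketched in a few lines. The page size, page table contents, and addresses below are all illustrative assumptions, not taken from any particular system:

```python
# Minimal sketch of paging address translation: a logical address is split
# into a page number and an offset, and a (hypothetical) page table maps
# page numbers to frame numbers.
PAGE_SIZE = 4096  # 4 KB pages (assumed for this example)

# Hypothetical page table: logical page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr: int) -> int:
    """Translate a logical address to a physical address, or fault."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not in memory")
    return page_table[page] * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4 -> frame 2: 2*4096 + 4 = 8196
print(translate(4100))  # 8196
```

A real MMU performs exactly this split-and-lookup in hardware, with the page table held in memory and cached in the TLB.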
</ Transparent Translation

The MMU performs the translation transparently to the executing program,
allowing it to access its data and instructions without being aware of the
underlying physical memory organization.
</ Key Differences
Nature: Logical addresses are virtual and generated by the CPU, while physical
addresses are real and represent actual memory locations.
Contiguity: Logical addresses are typically contiguous, while physical
addresses may not be due to memory management techniques.
Usage: Programs use logical addresses for memory access, while the operating
system and hardware interact with physical addresses for memory
management and data manipulation.
</ Internal and External Fragmentation
</ Internal Fragmentation
● Internal fragmentation occurs when the allocated memory space within a block
is larger than what is required by the process or data being stored.
● Causes:
● Fixed-size allocation: In systems where memory is allocated in fixed-size
blocks (e.g., fixed partitioning), if the requested memory size is smaller
than the block size, the remaining space within the block is wasted.
● Alignment requirements: Memory allocation algorithms often align
memory blocks to meet hardware or software requirements, leading to
unused space within allocated blocks.
</ Example

● Consider a memory allocation system with fixed-size blocks of 4 KB. If a
process requests only 2 KB of memory but is allocated a full block, the
remaining 2 KB within the block is wasted, resulting in internal fragmentation.
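The arithmetic in this example can be checked with a tiny helper, assuming the 4 KB block size used above:

```python
# Internal fragmentation under fixed-size 4 KB blocks: each request is
# rounded up to a whole number of blocks, and the unused tail of the last
# block is wasted space inside the allocation.
BLOCK = 4096  # block size in bytes (4 KB, as in the example)

def internal_fragmentation(request_bytes: int) -> int:
    blocks = -(-request_bytes // BLOCK)        # ceiling division
    return blocks * BLOCK - request_bytes      # wasted bytes in the last block

print(internal_fragmentation(2048))   # 2048 bytes wasted (half a block)
print(internal_fragmentation(4096))   # 0 bytes wasted (exact fit)
```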
</ Impact

● Decreased memory utilization: Internal fragmentation reduces the effective use
of memory resources, leading to lower overall memory utilization.
● Increased memory overhead: Wasted memory within allocated blocks adds to the
memory overhead of the system, reducing its efficiency.
</ Mitigation

● Dynamic resizing: Systems can dynamically adjust the size of memory blocks
based on the actual requirements of the processes or data being stored,
minimizing wasted space.
● Memory pooling: Pooling smaller memory requests together can help utilize the
remaining space within blocks more effectively.
</ External Fragmentation

● External fragmentation occurs when there is enough total free memory to satisfy
a memory request, but the available memory is fragmented into small, non-
contiguous blocks, making it impossible to allocate a contiguous block of
memory.
</ External Fragmentation
● Causes:
● Dynamic memory allocation: Allocating and deallocating memory over
time can lead to small gaps or "holes" between allocated memory blocks.
● Variable-size allocation: In systems where memory blocks are of varying
sizes (e.g., dynamic partitioning), fragmentation can occur as blocks are
allocated and deallocated unevenly.
</ Example

● Suppose a system has several small free memory blocks scattered throughout
memory. Even though the total free memory might be sufficient to fulfill a
request, if these blocks are not contiguous, the request cannot be satisfied,
resulting in external fragmentation.
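This situation can be made concrete with a small sketch. The hole sizes and the request below are illustrative values:

```python
# External fragmentation: plenty of free memory in total, but no single
# contiguous hole is large enough for the request. (Hypothetical free list.)
free_blocks = [200, 150, 100, 150]   # sizes of scattered free holes, in KB

request = 400                        # KB, must be contiguous

total_free = sum(free_blocks)        # 600 KB free overall
largest_hole = max(free_blocks)      # but the biggest hole is only 200 KB

print(total_free >= request)         # True: enough memory in total
print(largest_hole >= request)       # False: the request cannot be satisfied
```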
</ Impact

● Reduced memory availability: External fragmentation can lead to situations
where there is enough free memory overall, but not in a contiguous block,
preventing certain memory requests from being fulfilled.
● Increased overhead: Managing fragmented memory requires additional
overhead, such as searching for available blocks and potentially performing
memory compaction operations.
</ Mitigation

● Compaction: Memory compaction involves reorganizing memory to consolidate
small free blocks into larger contiguous blocks, reducing fragmentation.
● Coalescing: Coalescing involves merging adjacent free blocks to form larger
contiguous blocks, reducing fragmentation without the need for full memory
compaction.
● Buddy allocation: Buddy allocation algorithms divide memory into blocks of
power-of-two sizes and use a buddy system to merge adjacent free blocks,
reducing fragmentation.
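Coalescing, for instance, can be sketched as a single pass over a sorted free list. The `(start, size)` representation and the values below are illustrative:

```python
# Coalescing sketch: merge adjacent free blocks, given as (start, size)
# pairs, into larger contiguous blocks.
def coalesce(free_list):
    free_list = sorted(free_list)                 # sort by start address
    merged = [free_list[0]]
    for start, size in free_list[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == start:       # blocks touch: merge them
            merged[-1] = (last_start, last_size + size)
        else:
            merged.append((start, size))
    return merged

# Holes at 0..100, 100..150 and 300..400: the first two are adjacent.
print(coalesce([(100, 50), (0, 100), (300, 100)]))
# [(0, 150), (300, 100)]
```

Real allocators typically coalesce incrementally on each free rather than in a batch pass, but the merge condition is the same.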
</ Comparison
● Location of Wasted Memory:
● Internal fragmentation occurs within allocated memory blocks, where unused space exists.
● External fragmentation occurs between allocated memory blocks, where gaps or "holes" exist.
● Cause:
● Internal fragmentation is caused by the allocation of memory blocks that are larger than necessary.
● External fragmentation is caused by the allocation and deallocation of memory blocks over time,
leading to non-contiguous free memory spaces.
● Management:
● Internal fragmentation can be managed by selecting appropriate memory allocation strategies or
by using memory allocation algorithms that minimize wasted space.
● External fragmentation may require more complex memory management techniques, such as
compaction (reorganizing memory to eliminate gaps) or coalescing (merging adjacent
free blocks).
VIRTUAL MEMORY
WHAT IS VIRTUAL MEMORY?
● A memory management technique used by operating systems to provide the
illusion of a larger logical or virtual memory space even though less storage
is physically present.
● The program is only partially loaded into main memory, which allows
programs to execute as if they have more memory than is physically
installed on the computer system.
● This makes program execution faster and also makes multiprogramming
possible (many programs executing simultaneously).

[Diagram: logical memory pages A–P held in secondary memory, a memory map
(page table) recording frame numbers and valid bits, and main memory frames
0–9 holding pages H, G, A, and C.]
VIRTUAL MEMORY KEY WORDS
● Pages and Page Tables: VM divides the virtual address space and physical
memory into fixed-size blocks called pages. Page tables are structures used
by the operating system to track the mapping between virtual pages and
physical memory pages.
● Page Faults: When a program accesses a virtual address that is not currently
mapped to a physical address in memory, a page fault occurs. The operating
system handles the page fault by fetching the required page from disk into
physical memory and updating the page table accordingly.
● Page Replacement: When physical memory becomes full, the operating
system must choose which pages to evict from memory to make room for
new pages. Page replacement algorithms, such as the least recently used
(LRU) algorithm, are used to determine which pages to evict based on their
recent usage patterns.
● Demand Paging: Virtual memory systems often employ demand paging,
which means that pages are loaded into memory only when they are actually
accessed. This allows programs to use more memory than is physically
available, as only the necessary pages are brought into memory when
needed.
ADVANTAGES & DISADVANTAGES
Advantages:
● A page is brought into main memory only when it is required
● Less I/O is needed
● Less memory is used, since only the required part of the program is loaded
into main memory
● Fast response
● More storage, meaning more users
● More storage, enabling multiprogramming

Disadvantages:
● When a required page is not in main memory, a page fault occurs
● It takes time to service a page fault
DEMAND PAGING
❖ Virtual memory can be implemented by demand paging
and demand segmentation.
❖ Demand paging is a technique in which a process resides in secondary
memory and its pages are loaded into main memory only when they are in
demand.
DEMAND PAGING
EXAMPLE OF DEMAND PAGING: When the CPU demands process 1 (P1),
there is a swap-in from secondary memory to main memory via the memory
map. Processes no longer needed in main memory are transferred back to
secondary memory; this is called swap-out.
DEMAND PAGING
When the CPU calls for process 3 (I-R) from main memory, there will be a
page fault (P3 is not found in main memory). The CPU transfers a control
signal to the OS, which checks whether there is space in main memory to
swap in P3. If there is space, P3 is swapped in directly; in this case, however,
P1 and P2 must both be swapped out first, and then the swapping in of P3 is
carried out.
PERFORMANCE OF DEMAND PAGING
● The page fault rate P lies in the range 0 <= P <= 1:
If P = 0, no memory access causes a page fault.
If P = 1, every memory access causes a page fault.
● The performance of demand paging is often measured in terms of the effective access time.
● Effective access time is the average time it takes to access memory when the cost of page faults is
amortized over all memory accesses.
● In some sense it is an average or expected access time.
EAT = (1 - P) * MAT + P * PFT
EAT = effective access time
MAT = physical memory (core) access time
PFT = page fault time
P = probability of a page fault occurring
(1 - P) = the probability of accessing memory in an available frame
Page fault time can itself be broken down into components such as servicing
the page fault interrupt, reading the page in from disk, and restarting the
process.
EXAMPLE OF DEMAND PAGING PERFORMANCE
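As a worked sketch of the formula, using illustrative timing values in the spirit of common textbook examples (not figures from this deck):

```python
# Worked effective access time example. All values are illustrative:
# MAT = 200 ns, PFT = 8 ms, and one access in 1,000 causes a page fault.
MAT = 200            # physical memory access time, in ns
PFT = 8_000_000      # page fault service time, in ns (8 ms)
P = 0.001            # page fault rate

EAT = (1 - P) * MAT + P * PFT
print(EAT)  # 8199.8 ns
```

Even a 1-in-1,000 fault rate inflates the average access time from 200 ns to roughly 8,200 ns, about a 40x slowdown, which is why keeping P tiny matters so much.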
</ Page Replacement Algorithms

There are three basic Page Replacement Algorithms:

● Optimal Page Replacement Algorithm
● First In First Out (FIFO) Page Replacement Algorithm
● Least Recently Used (LRU) Page Replacement Algorithm
</First In First Out Page Replacement Algorithm
This is the first basic page replacement algorithm. Its behaviour depends on the number of
frames available. Under demand paging, incoming pages occupy free frames until all frames
are filled. The real problem starts once the frames are full.
When the next page in the waiting queue tries to enter a frame, no replacement is needed if
that page is already present in one of the allocated frames.
</First In First Out Page Replacement Algorithm
If the page being searched for is found among the frames, this is known as a Page Hit.
If the page being searched for is not found among the frames, this is known as a Page
Fault.
When a page fault occurs and all frames are full, the First In First Out Page Replacement
Algorithm comes into the picture.
The First In First Out (FIFO) Page Replacement Algorithm removes the page that was
allotted a frame longest ago. That is, the page that has been resident in a frame the longest
is removed, and the new page waiting in the ready queue takes the freed frame.
Let us understand how the First In First Out Page Replacement Algorithm works with the
help of an example.
</ Example

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0 for a
memory with three frames and calculate the number of page faults using the FIFO (First In
First Out) page replacement algorithm.
Points to Remember
Page Not Found - - - > Page Fault
Page Found - - - > Page Hit
</ Reference String:

[Trace table for the FIFO example: the reference string above applied to three frames.]
</ Example
Number of Page Hits = 8
Number of Page Faults = 12

The Ratio of Page Hits to Page Faults = 8 : 12 - - - > 2 : 3 - - - > ≈ 0.67
The Page Hit Percentage = 8 * 100 / 20 = 40%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
Explanation:
First, fill the frames with the initial pages. Then, once the frames are filled, we need to
create space in the frames for each new page to occupy. With the First In First Out Page
Replacement Algorithm, we remove the page that has been in memory the longest. By
removing the oldest page, we allow the new page to occupy the space it frees up.
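The trace above can be reproduced with a short FIFO simulation:

```python
from collections import deque

# Simulate FIFO page replacement and count page faults.
def fifo_faults(refs, n_frames):
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in refs:
        if page in frames:
            continue          # page hit
        faults += 1           # page fault
        if len(frames) == n_frames:
            frames.popleft()  # evict the page resident the longest
        frames.append(page)
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(fifo_faults(refs, 3))  # 12 faults (so 20 - 12 = 8 hits)
```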
</ OPTIMAL Page Replacement Algorithm

This is the second basic page replacement algorithm. As with FIFO, its
behaviour depends on the number of frames available: under demand paging,
incoming pages occupy free frames until all frames are filled, and the real
problem starts once the frames are full. When the next page in the waiting
queue tries to enter a frame, no replacement is needed if that page is already
present in one of the allocated frames.
</ OPTIMAL Page Replacement Algorithm
If the page being searched for is found among the frames, this is known as a Page
Hit.
If the page being searched for is not found among the frames, this is known as
a Page Fault.
When a page fault occurs and all frames are full, the OPTIMAL Page Replacement
Algorithm comes into the picture. It works on the following principle:
Replace the page that will not be used for the longest period of time in the future.
This means that once all the frames are filled, we look ahead at the future references
of the pages currently in the frames and replace the page whose next use is
furthest away.
</ Example:
Suppose the remaining reference string is:
0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
and pages 6, 1, 2 are currently occupying the frames.
Now we need to bring 0 into a frame by removing one page, so we check
which of the resident pages is used furthest in the future.

From the subsequence 0, 3, 4, 6, 0, 2, 1 we can see that, of the resident pages,
1 is the last one to occur. So 0 can be placed in a frame by removing 1.
Let us understand how the OPTIMAL Page Replacement Algorithm works
with the help of a full example.
</ Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0
for a memory with three frames and calculate the number of page faults using
the OPTIMAL page replacement algorithm.
Points to Remember
Page Not Found - - - > Page Fault
Page Found - - - > Page Hit
</ Reference String:

[Trace table for the OPTIMAL example: the reference string above applied to three frames.]
Number of Page Hits = 9
Number of Page Faults = 11
The Ratio of Page Hits to Page Faults = 9 : 11 - - - > ≈ 0.82
The Page Hit Percentage = 9 * 100 / 20 = 45%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 45 = 55%
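Fault counts for the optimal policy are easy to get wrong by hand, so a direct simulation of the Belady rule (evict the resident page whose next use lies furthest in the future, or that is never used again) is a useful check:

```python
# Simulate the optimal (Belady) page replacement policy and count faults.
def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                          # page hit
        faults += 1                           # page fault
        if len(frames) < n_frames:
            frames.append(page)               # a free frame is available
            continue
        future = refs[i + 1:]
        # Evict the page used furthest in the future; pages never used
        # again get the largest possible key and are evicted first.
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else len(future))
        frames[frames.index(victim)] = page
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 4, 0]
print(optimal_faults(refs, 3))  # 11 faults (9 hits)
```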
Explanation

First, fill the frames with the initial pages. Then, once the frames are filled, we need to create
space in the frames for each new page to occupy.
While empty frames remain, incoming pages simply fill them. The problem occurs when there
is no free space left: we then replace the page that will not be used for the longest period of
time in the future.
A question arises: what if a page in a frame never appears again in the reference string?
Suppose the reference string is:
0, 2, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
and 6, 1, 5 are occupying the frames.
Here, we can see that page number 5 is not present in the reference string, even though it is
present in a frame. Since page 5 is never used again, it is the first candidate for removal when
a replacement is required, and another page can occupy its position.
</ Least Recently Used (LRU) Algorithm
This is the third basic page replacement algorithm. As before, its behaviour depends on the
number of frames available: under demand paging, incoming pages occupy free frames until
all frames are filled, and the real problem starts once the frames are full.
When the next page in the waiting queue tries to enter a frame, no replacement is needed if
that page is already present in one of the allocated frames.
If the page being searched for is found among the frames, this is known as a Page Hit.
If the page being searched for is not found among the frames, this is known as a Page
Fault.
When a page fault occurs and all frames are full, the Least Recently Used (LRU) Page
Replacement Algorithm comes into the picture.
</ Least Recently Used (LRU) Algorithm
The Least Recently Used (LRU) Page Replacement Algorithm works on the following
principle:
Replace the page whose most recent use lies furthest in the past, i.e., the least recently
used page.
Example:
Suppose the reference string so far is:
6, 1, 1, 2, 0, 3, 4, 6, 0
The pages with page numbers 6, 1, 2 are occupying the frames.
Now, we need to allot a space for the page numbered 0, so we look back into the past to
check which page can be replaced.
6 is the page in the frames that was used least recently.
So, replace 6 with the page numbered 0.
Let us understand how the Least Recently Used (LRU) Page Replacement Algorithm works
with the help of a full example.
</ Example:

Consider the reference string 6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0
for a memory with three frames and calculate the number of page faults using
the Least Recently Used (LRU) page replacement algorithm.
Points to Remember
Page Not Found - - - > Page Fault
Page Found - - - > Page Hit
</ Reference String:

[Trace table for the LRU example: the reference string above applied to three frames.]
</ Example:
Number of Page Hits = 7
Number of Page Faults = 13
The Ratio of Page Hits to Page Faults = 7 : 13 - - - > ≈ 0.54
The Page Hit Percentage = 7 * 100 / 20 = 35%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 35 = 65%

Explanation
First, fill the frames with the initial pages. Then, once the frames are filled, we need to
create space in the frames for each new page to occupy.
While empty frames remain, incoming pages simply fill them. The problem occurs when
there is no free space left: we then replace the page that has not been used for the longest
period of time in the past, i.e., the page whose last use lies furthest back.
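The counts can be reproduced with a short LRU simulation, here using an ordered dict to track recency:

```python
from collections import OrderedDict

# Simulate LRU page replacement: the least recently used page sits at the
# front of the ordered dict and is evicted first.
def lru_faults(refs, n_frames):
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)      # page hit: mark as most recent
            continue
        faults += 1                       # page fault
        if len(frames) == n_frames:
            frames.popitem(last=False)    # evict the least recently used
        frames[page] = None
    return faults

refs = [6, 1, 1, 2, 0, 3, 4, 6, 0, 2, 1, 2, 1, 2, 0, 3, 2, 1, 2, 0]
print(lru_faults(refs, 3))  # 13 faults (7 hits)
```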
THRASHING
Introduction
● Thrashing in OS is a phenomenon that occurs when a system spends a
significant amount of time paging rather than executing application
instructions.
● It is characterized by the constant swapping of data
between the main and virtual memory.
● Thrashing in OS occurs when the system is overwhelmed
by the number of processes that are simultaneously
running, and the system does not have enough physical
memory to accommodate all the processes.
● Thrashing in OS can also occur due to inefficient memory
allocation algorithms, leading to memory fragmentation.
Symptoms and How to Detect it?
● The symptoms of Thrashing in the OS include:
● High Disk Activity: When the system is Thrashing in OS,
the disk activity increases significantly as the system tries
to swap data between physical memory and virtual
memory.
● Slow Response Time: When the system is Thrashing in
OS, its response time slows significantly as the CPU
spends most of its time swapping data between physical
and virtual memory.
● High CPU Utilization: When the system is Thrashing in
OS, the CPU utilization increases significantly as it
spends most of its time swapping data between physical
and virtual memory.
Detection
● To detect thrashing, system administrators can use system monitoring
tools to track the system's performance metrics such as disk usage,
response time, and CPU utilization.
● If these metrics show a significant increase over a prolonged period, it
may indicate that the system is thrashing.
Causes:

● One of the main causes of thrashing is overcommitting memory: the
system allocates more memory than it has available.
● Another cause is inadequate memory allocation, where the system has too
little memory to hold all the necessary data and programs.
Effects
● Reduced overall system performance and responsiveness
● Longer boot times
● Slower application load times
● High disk usage
Prevention
● One technique is to increase the amount of memory available,
either by adding more physical memory or by using virtual
memory.
● Another technique is to reduce the number of processes running
concurrently, thus reducing the demand for memory.
● Load balancing can also help prevent Thrashing in OS by
distributing the workload evenly across multiple processors.
● Overall, the key to preventing thrashing is to ensure that the system has
enough memory, allocate memory efficiently, and use effective memory
management techniques to optimize memory usage.
CONCLUSION
● In operating systems, thrashing refers to a state where the system spends
excessive time swapping pages between main memory and secondary
storage, resulting in poor performance. This can occur when the system
has insufficient physical memory to hold all the required pages, so it
spends too much time swapping pages in and out of memory. To avoid
thrashing, the system can use various techniques, such as increasing the
amount of physical memory available, adjusting the paging algorithm, or
optimizing the application's memory usage.
● In conclusion, thrashing is a significant performance problem that can
occur when the system's physical memory is insufficient to hold all the
required pages in memory.
● To prevent thrashing, it is essential to optimize memory usage and
consider increasing physical memory or adjusting the paging algorithm.