Memory management in an operating system deals with the efficient allocation, tracking, and freeing of memory for processes. It covers techniques like paging, segmentation, and virtual memory to optimize performance and prevent fragmentation. The OS must balance speed, resource utilization, and protection while ensuring each process runs in isolation without interference.
1. What is memory management in an Operating System? What is the difference between logical and physical address?
Memory management is the process of handling computer memory efficiently by allocating, tracking, and freeing memory for programs and processes.
- Logical address: Generated by the CPU (virtual address).
- Physical address: Actual location in main memory (RAM).
- A Memory Management Unit (MMU) maps logical to physical addresses.
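To make this concrete, here is a minimal C sketch: the pointer values a program prints are logical (virtual) addresses, while the physical frames backing them are chosen by the OS and never visible to the program.

```c
#include <stdio.h>

int global_var = 42;

int main(void) {
    int stack_var = 7;
    /* These are logical (virtual) addresses generated for this process;
       the MMU translates them to physical addresses. With ASLR enabled,
       the values usually differ between runs even though the program
       is unchanged. */
    printf("address of global_var: %p\n", (void *)&global_var);
    printf("address of stack_var:  %p\n", (void *)&stack_var);
    return 0;
}
```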
2. In a demand paging system, explain how “Belady’s Anomaly” can occur and why some algorithms are immune to it.
Belady’s Anomaly in Demand Paging:
- In a demand paging system, pages are loaded into memory only when needed.
- Normally, you’d expect that increasing the number of allocated frames to a process reduces the number of page faults (since more pages can be kept in memory).
- Belady’s Anomaly is the paradoxical situation where adding more frames actually increases the number of page faults.
Why It Happens (Example with FIFO):
- FIFO (First-In-First-Out) replacement evicts the page that entered memory earliest, regardless of whether it is still needed.
- This can cause a heavily used page to be evicted just because it was loaded early.
Example Reference String: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
- With 3 frames (FIFO) -> 9 page faults.
- With 4 frames (FIFO) -> 10 page faults.
- More frames, more page faults = Belady’s Anomaly.
Why Some Algorithms Are Immune: Algorithms like LRU (Least Recently Used) and Optimal Replacement are immune because they are stack algorithms. Stack Property:
- A stack algorithm ensures that the set of pages in memory with n frames is always a subset of the set of pages in memory with n+1 frames.
- Thus, giving a process more frames cannot force eviction of a page it already had.
- This eliminates the possibility of anomaly.
LRU -> Always removes the least recently used page (closer to optimal).
Optimal Replacement -> Removes the page that will not be used for the longest future time (theoretical best).
Both satisfy the inclusion (stack) property, so the anomaly cannot occur.
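The anomaly is easy to reproduce. Below is a minimal C simulation of FIFO replacement over the reference string above; it prints 9 faults with 3 frames and 10 with 4 (frame counts up to 8 are assumed for simplicity).

```c
#include <stdio.h>

/* Count page faults for a reference string under FIFO replacement. */
static int fifo_faults(const int *refs, int n, int frames) {
    int mem[8];                      /* resident pages (frames <= 8 here) */
    int count = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < count; j++)
            if (mem[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        if (count < frames) {
            mem[count++] = refs[i];  /* free frame still available */
        } else {
            mem[next] = refs[i];     /* evict the oldest page (FIFO order) */
            next = (next + 1) % frames;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof refs[0];
    printf("3 frames: %d faults\n", fifo_faults(refs, n, 3)); /* prints 9  */
    printf("4 frames: %d faults\n", fifo_faults(refs, n, 4)); /* prints 10 */
    return 0;
}
```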
3. Explain the difference between internal and external fragmentation with examples, and how paging/segmentation addresses each.
- Internal Fragmentation: Unused memory inside allocated blocks (e.g., allocating 8 KB block when process needs 6 KB wastes 2 KB). Paging causes this when the last page of a process is not fully used.
- External Fragmentation: Free memory exists but is scattered, preventing allocation of large contiguous blocks (common in variable partitioning). Segmentation may suffer from this.
Solutions:
- Paging eliminates external fragmentation (any free frame can hold any page) but still causes internal fragmentation in a process's last page.
- Segmentation avoids internal fragmentation (segments are sized to fit) but suffers external fragmentation; compaction can reduce it but is costly.
4. Explain the concept of Copy-on-Write (COW) and its advantages.
Copy-on-Write allows multiple processes to share the same memory page until one modifies it, at which point the OS makes a copy for the modifying process.
Advantages:
- Saves memory during process creation (e.g., fork() in UNIX).
- Improves performance by avoiding unnecessary copying.
Use Case: OS process creation, virtual memory optimization.
5. What is the Translation Lookaside Buffer (TLB), and how does it improve performance?
The TLB is a high-speed cache storing recent page table entries to speed up virtual-to-physical address translation.
Working:
- On each memory access, the MMU first checks the TLB.
- If found (TLB hit), translation is immediate.
- If not found (TLB miss), page table lookup in main memory occurs, then TLB is updated.
Performance Impact: Reduces address translation time drastically. Hit ratios of 95%+ are common in modern systems.
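As a rough illustration, the effective access time (EAT) for a single-level page table is EAT = h × (t_TLB + t_mem) + (1 − h) × (t_TLB + 2 × t_mem). The timings below are assumed values, not measurements:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers (assumptions): TLB lookup 1 ns, memory
       access 100 ns, 95% hit ratio, single-level page table. */
    double t_tlb = 1.0, t_mem = 100.0, hit = 0.95;

    /* Hit: TLB lookup + one memory access.
       Miss: TLB lookup + page-table access + the actual memory access. */
    double eat = hit * (t_tlb + t_mem) + (1.0 - hit) * (t_tlb + 2.0 * t_mem);
    printf("Effective access time: %.1f ns\n", eat);  /* 106.0 ns */
    return 0;
}
```

Even a 5% miss rate costs only ~6% over the ideal 101 ns here, which is why hardware invests heavily in keeping the hit ratio high.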
6. Compare segmentation with paging in terms of protection, sharing, and memory efficiency.
Both segmentation and paging are memory management schemes in virtual memory systems, but they differ in how they divide memory and in their handling of protection, sharing, and efficiency.
Feature | Segmentation | Paging |
---|
Unit of division | Logical units (code, data, stack) | Fixed-size pages |
Protection | Per segment (logical boundaries) | Per page (uniform) |
Sharing | Easy, share segments across processes | Possible via page table mapping |
Fragmentation | External fragmentation | Internal fragmentation |
Efficiency | Matches program structure | Simplifies allocation & avoids external fragmentation |
7. How does the OS handle a “page fault” from the moment it occurs to process resumption?
Steps:
- CPU detects page not present in memory -> triggers page fault trap to OS.
- OS checks if reference is valid (legal address). If invalid -> segmentation fault.
- If valid, find location on disk.
- Choose a free frame (or run page replacement if none free).
- Read page from disk into frame.
- Update page table and possibly TLB.
- Restart the instruction that caused the fault.
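The same fault-catch-fix-restart cycle can be demonstrated from user space on POSIX systems. This sketch maps a page with PROT_NONE so any access faults, makes the page accessible inside a SIGSEGV handler, and lets the faulting write restart, mirroring the steps above in miniature:

```c
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t page_size;

/* Userspace analogue of a page-fault handler: fix the page, then return
   so the faulting instruction is restarted. */
static void on_segv(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)ctx;
    static const char msg[] = "fault caught; mapping page in\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);   /* async-signal-safe */
    void *page = (void *)((uintptr_t)info->si_addr
                          & ~(uintptr_t)(page_size - 1));
    if (mprotect(page, page_size, PROT_READ | PROT_WRITE) != 0)
        _exit(1);
    /* Returning restarts the instruction that faulted. */
}

int main(void) {
    page_size = (size_t)sysconf(_SC_PAGESIZE);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sigemptyset(&sa.sa_mask);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    /* PROT_NONE makes every access fault, mimicking a non-resident page. */
    char *region = mmap(NULL, page_size, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) return 1;

    region[0] = 'x';                 /* triggers the "page fault" */
    printf("after fault: %c\n", region[0]);
    return 0;
}
```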
8. Why might an OS use both paging and segmentation together?
Combining them leverages the benefits of both:
- Segmentation handles logical division (code, data, stack) for protection and sharing.
- Paging removes external fragmentation and simplifies memory allocation within segments.
Example: The Intel x86 architecture applies segmentation first (logical address -> linear address), then paging maps each linear address to a physical one.
9. How can the OS dynamically adjust the degree of multiprogramming based on memory pressure?
Dynamic Adjustment of Degree of Multiprogramming (DoM):
- The degree of multiprogramming (DoM) = number of processes kept in memory simultaneously.
- Too high DoM -> memory contention -> frequent page faults -> possible thrashing.
- Too low DoM -> underutilization of CPU.
- Hence, the OS must dynamically adjust DoM based on memory pressure indicators.
How the OS Monitors Memory Pressure:
- Page Fault Rate Monitoring: High page fault rate -> processes don’t have enough frames. Low page fault rate -> memory is under-utilized.
- Working Set Size Monitoring: If the sum of all working sets > total frames available, system is overcommitted -> thrashing risk. If total working sets < available frames, more processes can be admitted.
Techniques for Dynamic Adjustment:
- Page Fault Frequency (PFF) Control: Each process is monitored for page fault frequency.
If PFF is above a threshold -> OS allocates more frames or suspends the process if memory is insufficient.
If PFF is below a threshold -> OS may reclaim some frames and allocate them to other processes.
- Working Set Model (Denning’s Model): The OS maintains an estimate of each process’s working set (pages referenced in the last Δ time units) and ensures each process has at least its working set in memory. If total demand > available memory, the OS reduces the DoM by suspending processes, which guarantees stability by preventing thrashing.
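A toy sketch of PFF-style adjustment (all thresholds, fault rates, and frame counts below are illustrative assumptions, not real kernel values):

```c
#include <stdio.h>

#define NPROC 3

int main(void) {
    /* Assumed per-process page-fault rates (faults/sec) and allocations. */
    double pff[NPROC]    = {12.0, 2.0, 55.0};
    int    frames[NPROC] = {20, 40, 10};
    int    free_frames   = 8;
    const double HIGH = 30.0, LOW = 5.0;  /* illustrative thresholds */

    for (int i = 0; i < NPROC; i++) {
        if (pff[i] > HIGH) {
            /* Faulting too often: give a frame, or flag for suspension. */
            if (free_frames > 0) { frames[i]++; free_frames--; }
            else printf("P%d: no free frames -> candidate to suspend\n", i);
        } else if (pff[i] < LOW) {
            /* Faulting rarely: reclaim a frame for other processes. */
            frames[i]--; free_frames++;
        }
        printf("P%d: pff=%.0f frames=%d\n", i, pff[i], frames[i]);
    }
    printf("free frames: %d\n", free_frames);
    return 0;
}
```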
10. Explain “memory-mapped files” and their advantages over traditional file I/O.
Memory-mapped files map a file’s contents directly into the process’s address space, allowing file data to be accessed via memory loads/stores instead of read/write syscalls.
Advantages:
- Faster I/O by avoiding extra buffer copies.
- OS can use demand paging to load portions as needed.
- Enables multiple processes to share file data in memory.
11. Explain the concept of “Belady’s Anomaly” with an example. Why does it occur in FIFO page replacement?
Belady’s Anomaly is a counterintuitive situation where increasing the number of page frames in memory results in more page faults, not fewer.
Cause: Occurs in FIFO or FIFO-like page replacement policies because the algorithm ignores how recently or how often a page is used; it may evict a heavily used page simply because it is the oldest.
Example: Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 3 frames: 9 page faults
With 4 frames: 10 page faults
Prevention: Use algorithms like LRU or Optimal Replacement that avoid this anomaly.
12. How does the OS handle memory protection and isolation in a multi-process environment?
Memory protection ensures that one process cannot access another’s memory space, maintaining security and stability. Mechanisms:
- Base and Limit Registers: Define the valid memory address range for a process.
- Paging: Page tables store protection bits (read/write/execute).
- Segmentation: Stores access rights per segment.
OS Role: Performs checks during address translation and triggers a protection fault (segmentation fault) if a violation occurs.
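A minimal sketch of the base-and-limit check (the register values here are made up; real hardware performs this comparison on every memory access):

```c
#include <stdbool.h>
#include <stdio.h>

/* A logical address is valid only if it falls below the limit register;
   the base register then relocates it to a physical address. */
static bool access_ok(unsigned logical, unsigned limit) {
    return logical < limit;
}

int main(void) {
    unsigned base = 0x40000, limit = 0x1000;   /* assumed register values */
    unsigned logical = 0x0FFF;                 /* last valid byte */

    if (access_ok(logical, limit))
        printf("physical address: 0x%X\n", base + logical);  /* 0x40FFF */
    else
        printf("protection fault (segmentation fault)\n");
    return 0;
}
```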
13. Compare Internal Fragmentation and External Fragmentation with scenarios. How can each be minimized?
Internal Fragmentation: Memory wasted inside allocated blocks (e.g., allocating a 1 KB block when a process needs only 700 B).
- Cause: Fixed block sizes.
- Solution: Use paging or smaller allocation units.
External Fragmentation: Free memory scattered into non-contiguous blocks, making it unusable despite enough total space.
- Cause: Variable-sized allocations.
- Solution: Use compaction or paging/segmentation.
14. What is the role of the TLB in paging, and what are its design trade-offs?
The TLB caches recent virtual-to-physical address translations, reducing frequent page table lookups, which are costly because the page table resides in main memory. A high TLB hit ratio greatly improves effective memory access time.
Trade-offs:
- Larger TLB -> more entries but slower access time.
- Smaller TLB -> faster lookup but more misses.
- Associativity affects complexity and performance.
15. What is Demand Paging, and how does it differ from pure paging?
Demand paging loads a page into memory only when it is accessed, rather than preloading all pages.
Advantages: Reduces initial load time and memory usage.
Steps:
- Process starts with minimal pages.
- Page fault occurs on missing page.
- OS loads page from disk into memory.
Difference: Pure paging loads all pages upfront, which is slower and wastes memory if pages aren’t used.
16. Describe how “Copy-on-Write” works in virtual memory systems and why it’s important for process creation.
Copy-on-Write is an optimization technique in virtual memory systems used when processes share memory pages. Instead of immediately copying all pages during process creation (e.g., fork() in Unix/Linux), the OS delays copying until one of the processes modifies the page.
How COW Works (Step-by-Step):
- Fork (Process Creation): When a process calls fork(), the child process initially shares the parent’s memory pages. These shared pages are marked read-only in the page tables of both processes.
- Read Access: Both parent and child can read the same physical pages without issue.
- Write Access (Triggering COW): If either process tries to write to a shared page, a page fault occurs. The OS then:
  - Allocates a new physical frame.
  - Copies the original page’s contents into the new frame.
  - Updates the page table of the writing process to point to the private copy.
  - The other process continues using the old page.
Importance of COW:
- Memory Efficiency: Without COW, fork() would duplicate the entire memory of the parent, a huge overhead. With COW, memory is copied only when necessary, saving RAM.
- Performance Boost: Many child processes (as in the fork() + exec() model) immediately replace their memory with a new program via exec(). With COW, no time is wasted copying memory that would be discarded.
- Process Isolation & Safety: Ensures that modifications by one process don’t affect the other, maintaining process independence.
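The effect is observable from user space with standard POSIX calls; the page copying itself happens invisibly in the kernel when the child writes:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int shared_value = 100;

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: this first write to the shared page triggers a COW
           fault; the kernel copies the page, leaving the parent's
           copy untouched. */
        shared_value = 999;
        printf("child:  shared_value = %d\n", shared_value);  /* 999 */
        exit(0);
    }
    wait(NULL);
    /* Parent still sees 100: its page was never copied or modified. */
    printf("parent: shared_value = %d\n", shared_value);      /* 100 */
    return 0;
}
```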
17. What is swapping, and how does it affect system performance?
Swapping temporarily moves inactive processes from main memory to disk to free up space for active ones.
Steps:
- Identify process to swap out (low priority/inactive).
- Save process state and memory to swap space on disk.
- Load swapped process back when needed.
Performance Impact: Increases CPU availability but can be slow due to disk I/O overhead (swap thrashing if excessive).
18. How do memory-mapped files work, and how do they improve I/O performance?
Concept of Memory-Mapped Files:
- A memory-mapped file (mmap) is a mechanism that allows a file or a portion of it to be mapped directly into a process’s virtual address space.
- Once mapped, the application can access the file using simple memory operations (load/store) instead of explicit read() or write() system calls.
- This technique is widely supported in Unix/Linux (mmap) and Windows (MapViewOfFile).
How Memory-Mapped Files Work:
- The OS maps file blocks into the process’s virtual memory pages.
- When the process accesses an address inside the mapped region:
If the corresponding page is not yet in RAM -> page fault occurs.
The OS loads the needed page from disk into memory (on-demand paging).
- Writing to the mapped region marks the page dirty -> OS writes back to the file later (using page replacement/write-back policies). Effectively, file I/O is treated like normal memory access.
How It Improves I/O Performance:
- Eliminates Read/Write Syscall Overhead: Traditional I/O: user process -> syscall -> kernel buffer -> copy -> user buffer. Memory-mapped I/O: direct access via memory addresses (no extra data copying).
- On-Demand Loading (Paging): Only the required portions of the file are loaded into memory, reducing unnecessary I/O.
- Efficient File Sharing: Multiple processes can map the same file into their address spaces and share data through memory, without extra copies.
- Lazy Write-Back: Modifications to mapped memory are buffered and written back efficiently by the OS, reducing disk writes.
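A minimal POSIX mmap sketch (the filename data.txt is a placeholder for this example): the file is read through plain memory accesses, with pages faulted in on demand:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.txt", O_RDONLY);   /* placeholder filename */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* Map the whole file; pages are loaded lazily on first access. */
    char *data = mmap(NULL, (size_t)st.st_size, PROT_READ,
                      MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
    close(fd);   /* the mapping stays valid after closing the fd */

    /* Plain memory reads replace read() syscalls; the first touch of
       each page may fault and be filled from the file on demand. */
    size_t newlines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (data[i] == '\n') newlines++;
    printf("lines: %zu\n", newlines);

    munmap(data, (size_t)st.st_size);
    return 0;
}
```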
19. Explain the difference between Inverted Page Table and Conventional Page Table.
In a conventional page table, each process has its own table with one entry per virtual page, which leads to large memory usage for large address spaces. In an inverted page table, a single global table holds one entry per physical frame, which keeps it much smaller.
| Feature | Conventional Page Table | Inverted Page Table |
|---|---|---|
| Structure | One table per process | One global table for the system |
| Entries | One entry per virtual page | One entry per physical frame |
| Memory usage | Very large for big virtual address spaces | Much smaller, depends only on physical memory |
| Lookup speed | Fast, direct indexing | Slower, requires search (hashing used) |
| Process isolation | Each process has its own page table | Uses (PID, VPN) to distinguish processes |
| Overhead | High memory overhead | Higher computational overhead |
Trade-off: Inverted tables save memory but require more complex lookups, often needing hashing.
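A toy model of an inverted-table lookup (the table size, hash function, and probing scheme are all illustrative assumptions):

```c
#include <stdint.h>
#include <stdio.h>

/* Toy inverted page table: one entry per physical frame, searched by
   hashing (pid, vpn) with linear probing. */
#define NFRAMES 16

typedef struct { int pid; uint32_t vpn; int used; } ipt_entry;
static ipt_entry ipt[NFRAMES];

static unsigned hash(int pid, uint32_t vpn) {
    return (unsigned)(pid * 31u + vpn) % NFRAMES;
}

/* Returns the frame number holding (pid, vpn), or -1 on miss. */
static int lookup(int pid, uint32_t vpn) {
    unsigned h = hash(pid, vpn);
    for (int i = 0; i < NFRAMES; i++) {
        unsigned f = (h + i) % NFRAMES;
        if (!ipt[f].used) return -1;               /* probe chain ends */
        if (ipt[f].pid == pid && ipt[f].vpn == vpn) return (int)f;
    }
    return -1;
}

int main(void) {
    unsigned f = hash(7, 42);
    ipt[f] = (ipt_entry){ .pid = 7, .vpn = 42, .used = 1 };
    printf("frame for (pid=7, vpn=42): %d\n", lookup(7, 42));  /* hit  */
    printf("frame for (pid=7, vpn=43): %d\n", lookup(7, 43));  /* miss */
    return 0;
}
```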
20. What is Thrashing in OS memory management, and how can it be prevented?
Thrashing is a condition in a virtual memory system where the CPU spends most of its time swapping pages in and out of memory (handling page faults) instead of executing instructions. As a result, system performance drops drastically.
Cause:
- Degree of Multiprogramming is too high -> too many processes compete for limited physical memory.
- Working Set > Available Frames -> A process does not have enough frames to hold the pages it actively needs.
- Poor Page Replacement Policy (e.g., FIFO without considering page usage) -> causes frequent eviction of needed pages.
Symptoms of Thrashing:
- Very high page fault rate.
- CPU utilization drops (CPU is idle while waiting for disk I/O).
- System throughput decreases (less actual work done).
Prevention:
- Working Set Model: Ensure each process gets at least enough frames to cover its working set.
- Page Fault Frequency (PFF) Control: Monitor page fault rate and adjust frames or suspend/resume processes.
- Local Replacement: Restrict page replacement within the same process to avoid one process causing another to thrash.
- Reducing Degree of Multiprogramming: Temporarily suspend low-priority processes to free frames.