Virtual Memory MGMT - Module 4
● Background
● Demand Paging
● Copy-on-Write
● Page Replacement
● Allocation of Frames
● Thrashing
Demand Paging
• With swapping, the pager guesses which pages will be used before the process is
swapped out again
• Instead, with demand paging, the pager brings only the needed pages into memory
• How to determine that set of pages?
– Need new MMU functionality to implement demand paging
• If the pages needed are already memory resident
– No difference from non-demand paging
• If a page is needed and not memory resident
– Need to detect this and load the page into memory from storage
• Without changing program behavior
• Without the programmer needing to change code
Valid-Invalid Bit
• With each page-table entry a valid-invalid bit is associated
(v ⇒ page is legal and in memory, i ⇒ page is not legal or not in memory)
• During address translation, if the bit is i, a page fault occurs
Steps in Handling a Page Fault
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page on the disk
5. Issue a read from the disk to a free frame:
5.1 Wait in a queue for this device until the read request is serviced
5.2 Wait for the device seek and/or latency time
5.3 Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction
Performance of Demand Paging
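The standard analysis here is the effective access time (EAT), weighted by the page-fault probability p. The numbers below (200 ns memory access, 8 ms fault service) are illustrative assumptions, not values from the source:

```python
def effective_access_time(p, memory_access_ns, fault_service_ns):
    """EAT = (1 - p) * memory access time + p * page-fault service time,
    where p is the probability that a memory reference faults."""
    return (1 - p) * memory_access_ns + p * fault_service_ns

# Illustrative numbers: 200 ns memory access, 8 ms fault service, p = 1/1000.
print(effective_access_time(1 / 1000, 200, 8_000_000))  # 8199.8 ns
```

Even a fault rate of one in a thousand slows the average access by a factor of about 40, which is why keeping the fault rate low matters so much.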
• With FIFO replacement, the number of page faults for four frames can be greater than
for three frames
• This unexpected result is known as Belady’s anomaly: for some reference strings,
adding more frames increases the number of page faults
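The anomaly can be reproduced with a short simulation. This sketch (not from the source) runs FIFO on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames = deque()                   # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()       # evict the page that arrived first
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults -- more frames, yet more faults
```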
Optimal Algorithm
• Replace the page that will not be used for the longest period of time
– 9 page faults is optimal for the example reference string
• How do you know this? – You can’t read the future
• Used as a benchmark to measure how well other algorithms perform
Least Recently Used (LRU) Algorithm
• Use past knowledge rather than future knowledge
• Replace the page that has not been used for the longest period of time
• Associate the time of last use with each page
• Page faults = 12 for the example reference string
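The same simulation with an LRU policy, again assuming the usual 20-entry reference string and three frames, reproduces the 12-fault count:

```python
def lru_faults(refs, nframes):
    frames = []                        # least-recently-used page at index 0
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)        # refresh: move to most-recent end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the least recently used page
        frames.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12 faults -- between FIFO (15) and OPT (9)
```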
Enhanced Second-Chance Algorithm
• Improve algorithm by using reference bit and modify bit (if available) in
concert
• Take ordered pair (reference, modify)
1. (0, 0) neither recently used nor modified – best page to replace
2. (0, 1) not recently used but modified – not quite as good, must write out before
replacement
3. (1, 0) recently used but clean – probably will be used again soon
4. (1, 1) recently used and modified – probably will be used again soon and need
to write out before replacement
• When page replacement is called for, use the clock scheme but with the four classes:
replace the first page encountered in the lowest non-empty class
– Might need to search the circular queue several times
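A sketch of that victim-selection scan, under my own simplified representation (a list of dicts holding the two bits, plus a clock hand). The first pass looks for class (0, 0) without touching bits; the second looks for (0, 1) while clearing reference bits, which demotes classes (1, 0) and (1, 1) so a rescan will find them:

```python
def select_victim(pages, hand):
    """Enhanced second-chance scan. `pages` is a circular list of
    {'ref': bit, 'mod': bit} entries; `hand` is the clock position.
    Returns (victim_index, new_hand)."""
    n = len(pages)
    # Pass 1: look for class (0, 0) without changing any bits.
    for i in range(n):
        idx = (hand + i) % n
        if pages[idx]['ref'] == 0 and pages[idx]['mod'] == 0:
            return idx, (idx + 1) % n
    # Pass 2: look for class (0, 1), clearing reference bits as we scan.
    for i in range(n):
        idx = (hand + i) % n
        if pages[idx]['ref'] == 0:
            return idx, (idx + 1) % n
        pages[idx]['ref'] = 0
    # All reference bits are now 0; rescan (up to four passes in total).
    return select_victim(pages, hand)

pages = [{'ref': 1, 'mod': 0}, {'ref': 0, 'mod': 1}, {'ref': 0, 'mod': 0}]
print(select_victim(pages, 0))  # (2, 0): the (0, 0) page wins
```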
Counting Algorithms
• Keep a counter of the number of references that have been made to each page
– Not common
• Least Frequently Used (LFU) Algorithm: replaces page with smallest count
• Most Frequently Used (MFU) Algorithm: based on the argument that the page
with the smallest count was probably just brought in and has yet to be used
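An LFU counter can be sketched in a few lines (my own illustration; ties between equal counts are broken arbitrarily here, which a real implementation would have to decide deliberately):

```python
from collections import Counter

def lfu_faults(refs, nframes):
    frames, counts, faults = set(), Counter(), 0
    for page in refs:
        counts[page] += 1              # reference count for every page
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                # Evict the resident page with the smallest count.
                victim = min(frames, key=lambda p: counts[p])
                frames.remove(victim)
            frames.add(page)
    return faults

print(lfu_faults([1, 1, 1, 2, 3], 2))  # 3 faults: page 2 is evicted, not page 1
```

An MFU policy would simply swap `min` for `max` in the eviction step.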
Local Replacement
• Local replacement
– each process selects from only its own set of allocated frames
– More consistent per-process performance
– But possibly underutilized memory
Non-Uniform Memory Access
• So far we have assumed all memory is accessed with equal speed
• Many systems are NUMA – the speed of access to memory varies
– Consider system boards containing CPUs and memory, interconnected over a
system bus
• Optimal performance comes from allocating memory “close to” the CPU on
which the thread is scheduled
– And modifying the scheduler to schedule the thread on the same system board
when possible
– Solved in Solaris by creating lgroups
• Structure to track low-latency CPU/memory groups
• Used by the scheduler and pager
• When possible, schedule all threads of a process and allocate all memory for that
process within its lgroup
Thrashing
• If a process does not have “enough” pages, the page-fault rate is very high
– Page fault to get a page
– Replace an existing frame
– But quickly need the replaced frame back
– This leads to:
● Low CPU utilization
● The operating system thinking it needs to increase the degree of
multiprogramming
● Another process being added to the system
● A process is thrashing if it spends more time paging than executing;
this high paging activity is called thrashing
Cause of Thrashing
• To prevent thrashing, we must provide a process with as many frames as it needs
• We can limit the effects of thrashing by using a local (or priority) replacement
algorithm
Cause of Thrashing (contd)
• Localities are defined by the program structure and its data structures
• If accesses to any type of data were random rather than patterned,
caching would be useless
• Why does thrashing occur?
Σ (size of localities) > total memory size
– Limit the effects by using local or priority page replacement
Locality In A Memory-Reference Pattern
Working-Set Model
● There is a direct relationship between the working set of a process and its
page-fault rate
● The working set changes over time, producing peaks and valleys in the
page-fault rate
● A peak in the page-fault rate occurs when we begin demand-paging a new
locality
● Once the working set of the new locality is in memory, the page-fault rate falls
● When the process moves to a new working set, the page-fault rate rises
towards a peak once again
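The working set itself is the set of pages referenced in the most recent Δ references (the working-set window). A minimal sketch, with an illustrative reference string of my own:

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (0-indexed positions in the reference string)."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1]
print(working_set(refs, t=9, delta=4))  # {1, 5, 7}
```

Choosing Δ is the key tuning problem: too small and it misses the current locality; too large and it spans several localities, overstating the frames the process needs.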