VIRTUAL MEMORY /
MEMORY MANAGEMENT
DAY – 2
01/12/2024
TOPICS
• Background
• Demand Paging
• Copy-on-Write
• Page Replacement (LRU, MFU)
• Allocation of Frames
• Thrashing
• Memory-Mapped Files
BACKGROUND
• Code needs to be in memory to execute, but entire program rarely used
• Error code, unusual routines, large data structures
• Less I/O needed to load or swap programs into memory -> each user program runs faster
BACKGROUND (CONT.)
• Virtual memory – separation of user logical memory from physical
memory
• Only part of the program needs to be in memory for execution
• Logical address space can therefore be much larger than physical address space
• Allows address spaces to be shared by several processes
• Allows for more efficient process creation
• More programs running concurrently
• Less I/O needed to load or swap processes
BACKGROUND (CONT.)
• Virtual address space – logical view of how process is stored in memory
• Usually start at address 0, contiguous addresses until end of space
• Meanwhile, physical memory organized in page frames
• MMU must map logical to physical
PAGE FAULT
• If there is a reference to a page, the first reference to that page will trap to the operating system: page fault
1. Operating system looks at another table to decide:
• Invalid reference -> abort
• Just not in memory -> continue with the steps below
2. Find free frame
3. Swap page into frame via scheduled disk operation
4. Reset tables to indicate page now in memory
• Set validation bit = v
5. Restart the instruction that caused the page fault (sketched below)
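A minimal Python sketch of the handler steps above, assuming toy stand-ins (a page_table dict, a free_frames list, a backing_store dict); every name here is illustrative, not a real OS API:

    # Toy model of the page-fault steps; all names are illustrative.
    frame_contents = {}                     # "physical memory": frame -> page data

    class InvalidReference(Exception):
        pass

    def handle_page_fault(page, page_table, backing_store, free_frames):
        # 1. Look at the OS-internal table to decide
        if page not in backing_store:
            raise InvalidReference(page)    # invalid reference -> abort
        # 2. Find a free frame
        frame = free_frames.pop()
        # 3. "Swap" page into frame (stands in for a scheduled disk operation)
        frame_contents[frame] = backing_store[page]
        # 4. Reset tables: page now in memory, validation bit = v
        page_table[page] = (frame, "v")
        # 5. The hardware then restarts the faulting instruction

    # Usage: fault on page 7, which lives in the backing store
    page_table, free_frames = {}, [0, 1, 2]
    handle_page_fault(7, page_table, {7: "data"}, free_frames)
    print(page_table[7])                    # (2, 'v')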
STEPS IN HANDLING A PAGE FAULT
ASPECTS OF DEMAND PAGING
• Extreme case – start process with no pages in memory
• OS sets instruction pointer to first instruction of process, non-memory-resident -> page fault
• And page faults likewise occur for every other page of the process on its first access
• Pure demand paging
• Actually, a given instruction could access multiple pages -> multiple page faults
• Consider fetch and decode of an instruction which adds 2 numbers from memory and stores the result back to memory
• Pain decreased because of locality of reference
BASIC PAGE REPLACEMENT
1. Find the location of the desired page on disk
2. Find a free frame:
• If there is a free frame, use it
• If there is no free frame, use a page-replacement algorithm to select a victim frame; write the victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Continue the process by restarting the instruction that caused the trap
• Note there are now potentially 2 page transfers per page fault – increasing EAT (effective access time; worked example below)
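As a worked illustration of that note (the 200 ns memory-access and 8 ms fault-service figures below are assumed textbook-style values, not from the slides):

    # Effective access time (EAT) under demand paging; numbers are illustrative.
    memory_access_ns = 200           # assumed main-memory access time
    fault_service_ns = 8_000_000     # assumed page-fault service time (8 ms)

    def eat(p):
        # EAT = (1 - p) * memory access + p * page-fault service time
        return (1 - p) * memory_access_ns + p * fault_service_ns

    print(eat(0.001))  # one fault per 1,000 accesses -> 8199.8 ns, a ~40x slowdown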
PAGE REPLACEMENT
PAGE AND FRAME REPLACEMENT ALGORITHMS
• Page-replacement algorithm
• Want lowest page-fault rate on both first access and re-access
• Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string (see the sketch after this list)
• String is just page numbers, not full addresses
• Repeated access to the same page does not cause a page fault
• Results depend on number of frames available
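A small sketch of this evaluation method, counting faults for a FIFO policy; the reference string and frame counts are the classic Belady example, chosen to connect with the anomaly noted just below:

    from collections import deque

    def count_faults_fifo(reference_string, num_frames):
        frames = deque()                   # resident pages in FIFO order
        resident = set()
        faults = 0
        for page in reference_string:
            if page in resident:
                continue                   # repeated access: no page fault
            faults += 1
            if len(frames) == num_frames:
                resident.discard(frames.popleft())   # evict the oldest page
            frames.append(page)
            resident.add(page)
        return faults

    ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(count_faults_fifo(ref, 3))       # 9 faults
    print(count_faults_fifo(ref, 4))       # 10 faults: Belady's Anomaly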
• Stack implementation
• Keep a stack of page numbers in doubly linked form (sketched below):
• Page referenced:
• move it to the top
• requires 6 pointers to be changed
• LRU and OPT are cases of stack algorithms that don’t have Belady’s Anomaly
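A sketch of the stack idea using Python's OrderedDict, which keeps its entries in a doubly linked list internally; move_to_end does the unlink/relink that the "6 pointers" remark refers to:

    from collections import OrderedDict

    class LRUStack:
        def __init__(self):
            self.stack = OrderedDict()     # bottom = least recently used

        def reference(self, page):
            # Referenced page moves to the top of the stack
            if page in self.stack:
                self.stack.move_to_end(page)
            else:
                self.stack[page] = True

        def victim(self):
            # LRU victim sits at the bottom; no search is needed
            page, _ = self.stack.popitem(last=False)
            return page

    lru = LRUStack()
    for p in [4, 7, 0, 7, 1, 0, 1, 2]:
        lru.reference(p)
    print(lru.victim())                    # 4, the least recently used page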
USE OF A STACK TO RECORD MOST RECENT PAGE REFERENCES
LRU APPROXIMATION ALGORITHMS
• LRU needs special hardware and is still slow
• Reference bit
• With each page associate a bit, initially = 0
• When page is referenced bit set to 1
• Replace any with reference bit = 0 (if one exists)
• We do not know the order, however
• Second-chance algorithm
• Generally FIFO, plus hardware-provided reference bit
• Clock replacement
• If page to be replaced has:
• reference bit = 0 -> replace it
• reference bit = 1 -> then:
• set reference bit to 0, leave page in memory
• replace next page, subject to same rules (see the sketch below)
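A minimal sketch of the second-chance selection loop under these rules; the list-of-frames representation and starting hand position are illustrative simplifications:

    def clock_select_victim(pages, ref_bits, hand):
        # pages: resident pages; ref_bits: parallel 0/1 reference bits;
        # hand: index of the clock hand. Returns (victim index, new hand).
        while True:
            if ref_bits[hand] == 0:
                victim = hand                      # bit = 0 -> replace it
                return victim, (hand + 1) % len(pages)
            ref_bits[hand] = 0                     # bit = 1 -> second chance
            hand = (hand + 1) % len(pages)         # move on to the next page

    pages, ref_bits = [3, 5, 8, 2], [1, 0, 1, 1]
    victim, hand = clock_select_victim(pages, ref_bits, 0)
    print(pages[victim])   # 5: page 3 got a second chance, page 5 had bit 0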
SECOND-CHANCE (CLOCK) PAGE-REPLACEMENT ALGORITHM
COUNTING ALGORITHMS
• Keep a counter of the number of references that have been made to each page
• Not common
• Least Frequently Used (LFU) Algorithm: replaces page with smallest count
• Most Frequently Used (MFU) Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used
MFU IMPLEMENTATION
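The deck's MFU implementation slides are animation frames with no extractable text; below is a compact Python sketch of the counting idea described above (the capacity, tie-breaking, and return convention are illustrative choices, with the LFU variant noted in a comment):

    class MFUCache:
        # Most Frequently Used replacement: count references per resident
        # page and evict the page with the LARGEST count, on the argument
        # that small-count pages were probably just brought in.
        def __init__(self, capacity):
            self.capacity = capacity
            self.counts = {}                       # page -> reference count

        def reference(self, page):
            if page in self.counts:
                self.counts[page] += 1
                return False                       # hit: no fault
            if len(self.counts) == self.capacity:
                victim = max(self.counts, key=self.counts.get)   # MFU victim
                # (LFU would instead evict min(self.counts, key=self.counts.get))
                del self.counts[victim]
            self.counts[page] = 1
            return True                            # fault

    mfu = MFUCache(3)
    faults = sum(mfu.reference(p) for p in [1, 1, 1, 2, 3, 4])
    print(faults)          # 4: page 1 (count 3) is evicted when page 4 arrives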
GLOBAL VS. LOCAL ALLOCATION
• Global replacement – process selects a replacement frame from the set of all frames; one process can take a frame from another
• But then process execution time can vary greatly
• But greater throughput so more common
• Local replacement – each process selects from only its own set of allocated frames
• More consistent per-process performance
• But possibly underutilized memory
NON-UNIFORM MEMORY ACCESS
• So far all memory accessed equally
• Many systems are NUMA – speed of access to memory varies
• Consider system boards containing CPUs and memory, interconnected over a system bus
• Optimal performance comes from allocating memory “close to” the CPU on which the thread is scheduled
• And modifying the scheduler to schedule the thread on the same system board when possible
• Solaris solves this by creating lgroups
• Structure to track CPU / memory low-latency groups
• Used by scheduler and pager
• When possible, schedule all threads of a process and allocate all memory for that process within the lgroup
THRASHING
• If a process does not have “enough” pages, the page-fault rate is very high
• Page fault to get page
• Replace existing frame
• But quickly need replaced frame back
• This leads to:
• Low CPU utilization
• Operating system thinking that it needs to increase the degree of multiprogramming
• Another process added to the system