Lecture 8 - Virtual Memory
Recap: Paging
Implementation of Page Table
Outline
PART I: Basic Concepts
Limitations of Main Memory
A Solution for the Limitations of Main Memory
Source: https://en.wikipedia.org/wiki/Virtual_memory
Virtual Memory That is Larger Than Physical Memory
Virtual Address Space
● The logical address space is usually designed so that the stack starts at the maximum logical address and grows "down" while the heap grows "up"
○ Maximizes address-space use
○ The unused address space between the two is a hole
● System libraries are shared by mapping them into the virtual address space
● Memory is shared by mapping pages read-write into the virtual address space
○ Copy-on-Write (COW) (Page 399, Section 10.3)
Shared Library Using Virtual Memory
Demand Paging
● Idea: bring a page into memory
only when it is needed
○ Less I/O needed
○ Less memory needed
○ Faster response
○ More users
● Strategy: a page is needed ⇒ there is a reference to it
○ invalid reference ⇒ abort
○ not-in-memory ⇒ bring it into memory
Demand Paging: A Paging System with Swapping
● Lazy swapper – never swaps a page
into memory unless page is needed
○ Swapper that deals with pages is a
pager
● How does a Pager work?
○ Case 1: Needed pages are already
memory resident
○ Case 2: Needed pages are not
memory resident
⇒ Need to detect and load the page into memory from storage
Valid-Invalid Bit
● With each page-table entry a
valid–invalid bit is associated (v ⇒
in memory / memory resident,
i ⇒ not in memory)
● Initially valid–invalid bit is set
to i on all entries
● During MMU address
translation, if valid–invalid bit in
page table entry is i
⇒ page fault
6 Steps in Handling a Page Fault
Performance of Demand Paging
● Page Fault Rate (0 ≤ p ≤ 1)
○ if p = 0 no page faults
○ if p = 1, every reference is a fault
● Effective Access Time (EAT)
EAT = (1 – p) × memory access time
    + p × (page fault overhead
           + swap page out
           + swap page in)
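As a worked example (with assumed numbers in line with the usual textbook figures: a 200 ns memory access and an 8 ms page-fault service time), even a tiny fault rate dominates the effective access time:

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory access time + p * page-fault service time (ns)."""
    return (1 - p) * mem_ns + p * fault_ns

# One fault per 1000 references already slows memory down ~40x:
print(effective_access_time(0.001))  # → 8199.8 ns
```
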
Performance of Demand Paging - An Example
PART II: Page Replacement
Page Replacement once No Free Frame
● Page replacement – find some page
in memory that is not really in
use, and page it out
○ Algorithm – what to do when no
frame is free: terminate? swap
out? replace a page?
○ Performance – want an algorithm
that results in the minimum
number of page faults
● The same page may be brought into
memory several times
Need for Page Replacement
Basic Page Replacement
Graph of Page Faults Versus the Number of Frames
Page Replacement Algorithms
First-in First-Out (FIFO) Algorithm
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
● 3 frames (3 pages can be in memory at a time per process)
● Idea: Replace the oldest page (the one that entered memory first)
15 page faults
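The count can be checked with a short simulation (a sketch; the function name is my own):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())  # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))  # → 15
```
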
Optimal Algorithm
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
● 3 frames (3 pages can be in memory at a time per process)
● Idea: Replace page that will not be used for longest period of time
9 page faults!!!
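A sketch of the optimal (Bélády) policy, which needs full knowledge of the future reference string (function name is my own):

```python
def optimal_faults(refs, nframes):
    """Count page faults under optimal replacement: on a fault, evict
    the resident page whose next use is farthest in the future (or never)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                future = refs[i + 1:]
                victim = max(frames, key=lambda p: future.index(p)
                             if p in future else len(future))
                frames.remove(victim)
            frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(optimal_faults(refs, 3))  # → 9
```

Optimal is not implementable in practice (the future is unknown), but it gives the lower bound against which FIFO and LRU are judged.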
Least Recently Used (LRU) Algorithm
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 & 3 frames
● Use past knowledge rather than future
● Idea: Replace the page that has not been used for the longest time
12 page faults!!!
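A minimal LRU sketch, keeping pages ordered from least to most recently used (the function name is my own; a real implementation would use hardware support rather than a list scan):

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU replacement."""
    order, faults = [], 0        # order[0] is the least recently used page
    for page in refs:
        if page in order:
            order.remove(page)   # hit: refresh recency
        else:
            faults += 1
            if len(order) == nframes:
                order.pop(0)     # evict the LRU page
        order.append(page)       # page is now the most recently used
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))  # → 12
```
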
LRU Approximation Algorithm - Reference Bit
● Simple Variant: Use 1 bit to represent the recent state
○ With each page associate a bit, initially 0; when the page is referenced, the bit is set to 1
○ Replace any page with reference bit = 0
■ However, we do not know the order of use
● Additional-Reference-Bits Variant: Use 8 bits to represent the recent state
○ At regular timer intervals, shift each page's reference bits right by 1 and record
whether the page was referenced in the high-order bit
○ The page with the lowest reference-bits value is the one that is Least Recently
Used, and thus the one to be replaced
○ E.g., Page A with 11000100 is more recently used than another with 01110111
● Problem: Complex implementation
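One timer tick of this aging scheme can be sketched as follows (names are my own; real hardware sets the reference bit, and the OS does the shifting):

```python
def age_tick(history, referenced, pages):
    """Shift each page's 8-bit history right by 1; set the high-order
    bit iff the page was referenced since the last tick."""
    for p in pages:
        history[p] = (history[p] >> 1) | (0x80 if p in referenced else 0)

pages = ["A", "B"]
history = {p: 0 for p in pages}
# A is referenced in the first two ticks only; B in every tick:
for refs in [{"A", "B"}, {"A", "B"}, {"B"}]:
    age_tick(history, refs, pages)
# The lowest history value marks the least recently used page:
victim = min(pages, key=lambda p: history[p])  # → "A"
```
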
LRU Approximation Algorithm - Second Chance
● Idea: Basically FIFO, plus a hardware-provided reference bit
○ Clock replacement
○ If the page to be replaced has
■ reference bit = 0 ⟶ replace it
■ reference bit = 1 ⟶ set the reference bit to 0 (give the
page a second chance), and move on to the next page
in clock order
● Problem: Slow when the number of pages is large
● Improved variant: use a (reference, modify) bit pair
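A sketch of the clock variant (the function name and fault count below were checked by hand on a small string; if every resident page has its bit set, the hand sweeps the full circle and the algorithm degrades to FIFO):

```python
def second_chance(refs, nframes):
    """Count page faults under second-chance (clock) replacement."""
    frames = []          # list of [page, reference_bit]
    hand, faults = 0, 0  # hand = clock pointer into frames
    for page in refs:
        entry = next((f for f in frames if f[0] == page), None)
        if entry:
            entry[1] = 1                   # hit: set the reference bit
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append([page, 1])
            continue
        while frames[hand][1] == 1:        # give second chances
            frames[hand][1] = 0
            hand = (hand + 1) % nframes
        frames[hand] = [page, 1]           # replace the victim
        hand = (hand + 1) % nframes
    return faults

print(second_chance([1, 2, 3, 1, 4], 3))  # → 4
```
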
CA Quiz - Week 8
● Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
● 4 frames (4 pages can be in memory at a time per process)
● Questions:
○ Q1. How many page faults will occur if FIFO is used?
○ Q2. How many page faults will occur if the Optimal algorithm is used?
Please explain your answers.
PART III: Frame Allocation & Thrashing
Allocation of Frames
● Each process needs minimum number of frames
○ E.g., IBM 370 – 6 pages to handle the SS MOVE
instruction:
■ the instruction is 6 bytes and might span 2 pages
■ 2 pages to handle the from operand
■ 2 pages to handle the to operand
● Maximum of course is total frames in the system
● Two major allocation schemes
○ fixed allocation
○ priority allocation
Fixed Allocation
● Equal allocation: Keep some as free frame buffer pool
○ E.g., if there are 100 frames (after allocating frames for the OS) and
5 processes, give each process 20 frames
● Proportional allocation: Allocate according to the size of process
○ Dynamic as degree of multiprogramming, process sizes change
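The proportional rule can be sketched in a few lines (the numbers in the example are assumed, matching the common textbook case of 62 free frames and processes of 10 and 127 pages):

```python
def proportional_allocation(sizes, m):
    """a_i = (s_i / S) * m frames for each process, rounded down;
    any remainder stays in the free-frame pool."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

print(proportional_allocation([10, 127], 62))  # → [4, 57]
```

With equal sizes this degenerates to equal allocation, e.g. two 20-page processes sharing 100 frames get 50 each.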
Priority Allocation
● Idea: Frame allocation is based on the process's priority, not size
○ Background vs Foreground Process
○ Interactive Processes
○ Time in system
Global versus Local Allocation
Global Allocation
● Idea: a process selects a replacement frame from the set of all frames; one process can take a frame from another
● Advantage: greater throughput
● Problem: process execution time can vary greatly

Local Allocation
● Idea: each process selects from only its own set of allocated frames
● Advantage: consistent per-process performance
● Problem: possibly underutilized memory
Thrashing
● Thrashing: a process is busy swapping pages in and out
○ It does not have "enough" frames, so the page-fault rate is
very high
● Problems:
○ Low CPU utilization
○ The OS thinks that it needs to
increase the degree
of multiprogramming
Solutions for Thrashing
Working-Set Model
Δ ≡ working-set window ≡ a fixed number of page references
WSSᵢ (working-set size of process Pᵢ) = total number of distinct pages referenced in the most recent Δ
D = ∑ WSSᵢ ≡ total demand for frames
1. If D > m (the total number of frames) ⇒ thrashing
2. Policy: if D > m, then suspend or swap out one of the processes
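The model can be sketched directly from these definitions (the function name and example reference streams are my own):

```python
def total_demand(ref_streams, delta):
    """WSS_i = number of distinct pages process i referenced in its
    last delta references; D = sum of all WSS_i."""
    wss = [len(set(refs[-delta:])) for refs in ref_streams]
    return wss, sum(wss)

# Two processes' recent reference streams, window delta = 3:
streams = [[1, 2, 1, 3, 2], [8, 9, 8]]
wss, D = total_demand(streams, delta=3)   # wss = [3, 2], D = 5
m = 4                                     # assumed total frame count
thrashing = D > m  # policy: suspend or swap out a process when True
```
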
Summary
● Part I: Basic Concepts
○ Virtual Memory
○ Valid-Invalid Bit
○ Demand Paging
● Part II: Page Replacement Algorithms
○ FIFO, Optimal, LRU, LRU Approximation
● Part III: Frame Allocation & Thrashing
○ Fixed- and Priority Allocation
○ Thrashing
THANK YOU!