
COMP 2040 - Operating Systems

Lecture 8 - Virtual Memory


Duc-Trong Le,
Adjunct Faculty@College of Engineering & Computer Science
[email protected] | November 23, 2023
Recap: Binding of Instructions and Data to Memory
Address binding of instructions and data to memory
addresses can happen at three different stages:
● Compile time: If the memory location is known a priori,
absolute code can be generated; must recompile code
if the starting location changes.
● Load time: Must generate relocatable code if the
memory location is not known at compile time.
● Execution time: Binding delayed until run time if the
process can be moved during its execution from one
memory segment to another.
○ Need hardware support for address maps (e.g.,
base and limit registers)

2
Recap: Paging

(Figures: memory before allocation and after allocation)


3
Recap: Page Table

● Address generated by CPU is divided into:


○ Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
○ Page offset (d) – combined with base address to define the physical memory
address that is sent to the memory unit
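The split described above can be sketched in a few lines of Python; this is an illustrative sketch assuming a 4 KB (2^12-byte) page size, not code from any particular OS:

```python
# Split a logical address into page number (p) and offset (d),
# assuming a 4 KB page size (12 offset bits).

PAGE_SIZE = 4096          # 2^12 bytes
OFFSET_BITS = 12

def split_address(addr):
    p = addr >> OFFSET_BITS       # page number: high-order bits
    d = addr & (PAGE_SIZE - 1)    # offset: low-order 12 bits
    return p, d

p, d = split_address(0x12345)
print(p, d)   # page 0x12 (18), offset 0x345 (837)
```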
4
Recap: Implementation of Page Table
Motivating Problem: On a 32-bit OS with a 4KB page size, the page table would have about 1 million
entries (2^32 / 2^12) => slow lookup and large memory overhead

Implementation
of Page Table

Hierarchical PT Hashed PT Inverted PT
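The back-of-envelope arithmetic behind the motivating problem can be checked directly:

```python
# Entry count for a flat page table: a 32-bit address space with
# 4 KB pages yields 2^20 (about one million) page-table entries.

address_bits = 32
page_size = 2 ** 12            # 4 KB
num_entries = 2 ** address_bits // page_size
print(num_entries)             # 1048576, i.e. ~1 million
```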

5
Outline

● Part I: Basic Concepts

● Part II: Page Replacement

● Part III: Frame Allocation & Thrashing

6
PART I: Basic Concepts

7
Limitations of Main Memory

A program (code) must be in main memory to
execute, which raises some problems:
● The entire program is rarely used → wasted memory, lower throughput
○ Error-handling code, unusual routines
● Data may not fit into main memory

8
A Solution for the Limitations of Main Memory

Idea: Partially-loaded program

● No longer constrained by the limits of physical memory size
→ Can load a 'big' program
● Each program takes less memory while running
→ Increased CPU utilization and throughput
● Less I/O needed to load or swap programs
→ Programs run faster
⇒ Virtual Memory
9
Virtual Memory

Virtual memory – separation of user logical memory


from physical memory
● Benefits of partially-loaded program
● Logical address space can be much larger than
physical address space
● Allows address spaces to be shared by processes
● Allows for more efficient process creation

Source: https://en.wikipedia.org/wiki/Virtual_memory

10
Virtual Memory That is Larger Than Physical Memory

● Virtual address space –


logical view of how process
is stored in memory
○ MMU must map logical
to physical
● Virtual memory can be
implemented via:
○ Demand paging
○ Demand segmentation

11
Virtual Address Space
● Usually design logical address space for stack to
start at Max logical address and grow “down”
while heap grows “up”
○ Maximizes address space use
○ Unused address space between the two is the "hole"
● System libraries shared via mapping into virtual
address space
● Shared memory by mapping pages read-write into
virtual address space
○ Copy-on-Write (COW) (Page 399, Section 10.3)
12
Shared Library Using Virtual Memory

13
Demand Paging
● Idea: bring a page into memory
only when it is needed
○ Less I/O needed
○ Less memory needed
○ Faster response
○ More users
● Strategy: Page is needed ⇒
reference to it
○ invalid reference ⇒ abort
○ not-in-memory ⇒ bring to memory
14
Demand Paging ~ A Paging System with Swapping
● Lazy swapper – never swaps a page
into memory unless page is needed
○ Swapper that deals with pages is a
pager
● How does a Pager work?
○ Case 1: Needed pages are already
memory resident
○ Case 2: Needed pages are not
memory resident
⇒ Need to detect and load the page into memory from storage
15
Valid-Invalid Bit
● With each page table entry a
valid–invalid bit is associated (v
in-memory – memory resident,
i not-in-memory)
● Initially valid–invalid bit is set
to i on all entries
● During MMU address
translation, if valid–invalid bit in
page table entry is i
⇒ page fault
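The valid-invalid check during translation can be sketched as follows; the names (`page_table`, `PageFault`, `translate`) are illustrative, not from any real OS or MMU API:

```python
# Sketch of the MMU's valid-invalid check: an entry marked 'i'
# traps to the OS as a page fault; a 'v' entry yields the
# physical address (frame base + offset).

class PageFault(Exception):
    pass

# page table: page number -> (frame number, valid-invalid bit)
page_table = {0: (5, 'v'), 1: (None, 'i'), 2: (9, 'v')}

def translate(page, offset, page_size=4096):
    frame, bit = page_table[page]
    if bit == 'i':
        raise PageFault(page)      # trap: OS page-fault handler takes over
    return frame * page_size + offset

print(translate(0, 100))   # 5 * 4096 + 100 = 20580
```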
16
6 Steps in Handling a Page Fault

● Two scenarios when


allocating free frame:
○ Available ⇒ Allocate
○ Unavailable ⇒ Page
Replacement (Part II)
● OSs maintain a
free-frame list.

17
Performance of Demand Paging
● Page Fault Rate (0 ≤ p ≤ 1)
○ if p = 0 no page faults
○ if p = 1, every reference is a fault
● Effective Access Time (EAT)
EAT = (1 – p) x memory access
+ p (page fault overhead
+ swap page out
+ swap page in )

18
Performance of Demand Paging - An Example

● Memory access time = 200 nanoseconds


● Average page-fault service time = 8 milliseconds
● EAT = (1 – p) x 200 + p (8 milliseconds)
= (1 – p ) x 200 + p x 8,000,000
= 200 + p x 7,999,800
● If one access out of 1,000 causes a page fault, i.e., p=0.001, then
EAT = 8.2 microseconds.
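The EAT arithmetic on this slide can be wrapped in a small helper (times in nanoseconds, with the slide's 200 ns access and 8 ms fault-service values as defaults):

```python
# Effective Access Time for demand paging:
# EAT = (1 - p) * memory access + p * page-fault service time

def eat(p, mem_access_ns=200, fault_service_ns=8_000_000):
    return (1 - p) * mem_access_ns + p * fault_service_ns

print(eat(0.001))   # 8199.8 ns, i.e. ~8.2 microseconds
```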

19
PART II: Page Replacement

20
Page Replacement once No Free Frame
● Page replacement – find some page
in memory that is not really in
use, and page it out
○ Algorithm – terminate the process?
swap it out? or replace a page?
○ Performance – want an algorithm
which will result in the minimum
number of page faults
● Same page may be brought into
memory several times
21
Need for Page Replacement

22
Basic Page Replacement

Page & Frame


Replacement Algorithms
23
Page and Frame Replacement Algorithms
● Frame-allocation algorithm determines
○ How many frames to give each process
○ Which frames to replace
● Page-replacement algorithm wants
○ The lowest page-fault rate on both first access and re-access
● Algorithm Evaluation: run it on a particular string of memory
references and compute the number of page faults on that reference string
○ E.g., the reference string of referenced page numbers
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1

24
Graph of Page Faults Versus the Number of Frames

(Figure: page-fault count versus number of frames ~ size of physical memory)

25
Page Replacement Algorithms

Page Replacement
Algorithms

FIFO Optimal LRU LRU Approximation

26
First-in First-Out (FIFO) Algorithm
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
● 3 frames (3 pages can be in memory at a time per process)
● Idea: Older pages are replaced

15 page faults

● Problem: Adding more frames can cause more page faults!


○ Belady's Anomaly
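The FIFO behavior above can be simulated in a few lines; counting faults on the slide's reference string with 3 frames reproduces the 15 reported here. This is an illustrative sketch, not the textbook's code:

```python
from collections import deque

# FIFO page replacement: on a fault with no free frame,
# evict the page that has been resident longest.

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # no free frame: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))   # 15
```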
27
FIFO Illustrating Belady’s Anomaly

28
Optimal Algorithm
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
● 3 frames (3 pages can be in memory at a time per process)
● Idea: Replace page that will not be used for longest period of time

9 page faults!!!

● Problem: Can't read the future!!!
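Although unrealizable online, the optimal policy is easy to simulate offline since the whole reference string is known; on the slide's string with 3 frames it yields the 9 faults shown. A minimal sketch:

```python
# Optimal (Belady's) replacement: evict the page whose next use
# lies farthest in the future (pages never used again go first).

def optimal_faults(refs, nframes):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            rest = refs[i + 1:]
            def next_use(q):
                return rest.index(q) if q in rest else len(refs)
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(optimal_faults(refs, 3))   # 9
```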

29
Least Recently Used (LRU) Algorithm
● Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 & 3 frames
● Use past knowledge rather than future
● Idea: Replace the page that has not been used for the longest time

12 page faults!!!

● Good algorithm and frequently used, but needs hardware support


● Implementation using Counter or Stack
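In software, LRU is commonly modeled with an ordered structure standing in for the hardware counter/stack support; the sketch below uses Python's `OrderedDict` and reproduces the 12 faults on this slide:

```python
from collections import OrderedDict

# LRU replacement: on a hit, move the page to the "most recent"
# end; on a fault with no free frame, evict from the "least
# recent" end.

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)    # evict least recently used
            frames[page] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1]
print(lru_faults(refs, 3))   # 12
```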
30
LRU Stack Implementation

31
LRU Approximation Algorithm - Reference Bit
● Simple Variant: Use 1 bit to represent the recent state
○ With each page associate a bit, initially = 0; when the page is referenced, the bit is set to 1
○ Replace any page with reference bit = 0
■ However, we do not know the order
● Additional Reference Bits Variant: Use 8 bits to represent the recent history
○ At regular intervals, shift each page's history bits right by 1 and record the
current reference bit in the high-order position
○ The page with the lowest history value is the one that is Least Recently
Used, thus to be replaced
○ E.g., Page A with 11000100 is more recently used than another with 01110111
● Problem: Complex implementation
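The additional-reference-bits (aging) variant can be sketched as below; function names and the tick-driven structure are illustrative, and the 8-bit history per page is kept as a plain integer:

```python
# Additional-reference-bits sketch: at each timer tick, shift every
# page's 8-bit history right and record its reference bit in the
# high-order position. The page with the lowest value is the victim.

history = {page: 0 for page in ('A', 'B', 'C')}   # 8-bit shift registers
referenced = set()                                 # pages touched this tick

def on_reference(page):
    referenced.add(page)

def on_timer_tick():
    for page in history:
        bit = 0x80 if page in referenced else 0
        history[page] = bit | (history[page] >> 1)
    referenced.clear()

on_reference('A'); on_timer_tick()      # A: 10000000
on_reference('B'); on_timer_tick()      # A: 01000000, B: 10000000
victim = min(history, key=history.get)  # C is still 00000000
print(victim)   # C
```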

32
LRU Approximation Algorithm - Second Chance
● Idea: Generally FIFO, plus a hardware-provided reference bit
○ Clock replacement
○ If the page to be replaced has
■ Reference bit = 0 ⟶ replace it
■ Reference bit = 1 ⟶ set the reference bit to 0, leave the page in
memory, and move on to the next page in clock order
● Problem: Slow with a large number of pages
● Improved variant: use a (reference, modify) bit pair
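The clock scan can be sketched as a victim-selection routine; the circular-list representation and names here are illustrative:

```python
# Second-chance (clock) victim selection: sweep in FIFO order;
# a page with reference bit 1 has its bit cleared and is skipped
# (its "second chance"); the first page with bit 0 is evicted.

def second_chance_victim(pages, ref_bits, hand):
    """pages: circular list of resident pages; ref_bits: page -> 0/1;
    hand: current clock-hand index. Returns (victim, new hand)."""
    while True:
        page = pages[hand]
        if ref_bits[page] == 0:
            return page, (hand + 1) % len(pages)
        ref_bits[page] = 0                 # clear bit: second chance
        hand = (hand + 1) % len(pages)

pages = ['A', 'B', 'C']
bits = {'A': 1, 'B': 0, 'C': 1}
victim, hand = second_chance_victim(pages, bits, 0)
print(victim)   # B (A's bit is cleared first, then B has bit 0)
```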

33
LRU Approximation Algorithm - Second Chance

34
CA Quiz - Week 8
● Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
● 4 frames (4 pages can be in memory at a time per process)
● Questions:
○ Q1. How many page faults will occur if FIFO is used?
○ Q2. How many page faults will occur if the Optimal algorithm is used?
Please do explain your answer.

35
PART III: Frame Allocation & Thrashing

36
Allocation of Frames
● Each process needs minimum number of frames
○ E.g., IBM 370 – 6 pages to handle SS MOVE
instruction:
■ instruction is 6 bytes, might span 2 pages
■ 2 pages to handle from
■ 2 pages to handle to
● Maximum of course is total frames in the system
● Two major allocation schemes
○ fixed allocation
○ priority allocation

37
Fixed Allocation
● Equal allocation: Give each of n processes an equal share, m/n, of the m
frames; leftover frames can be kept as a free-frame buffer pool
○ E.g., if there are 100 frames (after allocating frames for the OS) and
5 processes, give each process 20 frames
● Proportional allocation: Allocate according to the size of process
○ Dynamic as degree of multiprogramming, process sizes change
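Proportional allocation can be sketched as follows; the example numbers (process sizes 10 and 127, 62 free frames) are the classic textbook illustration:

```python
# Proportional allocation: process i with size s_i out of total
# size S gets roughly (s_i / S) * m of the m available frames.

def proportional_allocation(sizes, m):
    S = sum(sizes)
    return [s * m // S for s in sizes]

print(proportional_allocation([10, 127], 62))   # [4, 57]
```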

38
Priority Allocation
● Idea: Frame allocation is based on the process's priority, not size
○ Background vs Foreground Process
○ Interactive Processes
○ Time in system

39
Global versus Local Allocation
● Global Allocation
○ Idea: a process selects a replacement frame from the set of all frames;
one process can take a frame from another
○ Advantage: greater throughput
○ Problem: process execution time can vary greatly
● Local Allocation
○ Idea: each process selects from only its own set of allocated frames
○ Advantage: consistent per-process performance
○ Problem: possibly underutilized memory

40
Thrashing
● Thrashing: A process is busy swapping pages in and out
○ It does not have "enough" pages, so the page-fault rate is
very high
● Problems:
○ Low CPU utilization
○ OS thinks that it needs to
increase the degree
of multiprogramming

41
Solutions for Thrashing

● Page Replacement Algorithms:


○ Local Replacement
○ Priority Replacement
● Working-Set Model:
○ Approximate the locality
○ Once thrashing happens, suspend
or swap out one of the processes

42
Working-Set Model
Δ ≡ working-set window ≡ a fixed number of page references
WSSi (working-set size of process Pi) = number of distinct pages referenced in the most recent Δ
D = ∑ WSSi ≡ total demand for frames

1. If D > m (the number of available frames) → Thrashing
2. Policy: if D > m, then suspend or swap out one of the processes
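The working-set bookkeeping can be sketched as below; the per-process reference strings and the values of Δ and m are made up for illustration:

```python
# Working-set sketch: WSS_i is the number of distinct pages a
# process referenced in its last Δ references; if the total
# demand D exceeds the available frames m, thrashing threatens.

def wss(ref_string, delta):
    return len(set(ref_string[-delta:]))

processes = {
    'P1': [1, 2, 1, 3, 2, 1],
    'P2': [4, 4, 5, 6, 5, 4],
}
delta, m = 4, 5
D = sum(wss(refs, delta) for refs in processes.values())
print(D, D > m)   # D = 6 > m = 5 -> thrashing risk
```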

43
Summary
● Part I: Basic Concepts
○ Virtual Memory
○ Valid-Invalid Bit
○ Demand Paging
● Part II: Page Replacement Algorithms
○ FIFO, Optimal, LRU, LRU Approximation
● Part III: Frame Allocation & Thrashing
○ Fixed- and Priority Allocation
○ Thrashing

44
THANK YOU!

45
