Operating Systems
Lectures 10, 11, 12 and 13 - Memory Management
Prof. Nalini Venkatasubramanian
[email protected]
Outline
Background
Logical versus Physical Address Space
Swapping
Contiguous Allocation
Paging
Segmentation
Segmentation with Paging
Background
Delayed binding
Linker, loader
produces efficient code, allows separate compilation
portability and sharing of object code
Late binding
VM, dynamic linking/loading, overlaying, interpreting
code less efficient, checks done at runtime
flexible, allows dynamic reconfiguration
Dynamic Loading
[Figure: dynamic relocation - the CPU issues a logical address (ma); the hardware adds the base register value (ba) to form the physical address (pa): pa = ba + ma]
Fixed partitions
Contiguous Allocation (cont.)
[Figure: snapshots of contiguous allocation over time - the OS and Process 5 stay resident while Process 8 exits and Processes 9 and 10 are allocated into the resulting holes]
[Figure: paging hardware - the CPU generates (p, d); the page table maps page number p to frame number f, and the physical address is (f, d)]
Example of Paging
[Figure: pages 0-3 of logical memory are mapped through a page table (e.g. page 0 → frame 1, page 3 → frame 7) into non-contiguous frames of physical memory]
Page Table Implementation
Page table is kept in main memory
Page-table base register (PTBR) points to the page table.
Page-table length register (PTLR) indicates the size of the page table.
Every data/instruction access requires two memory accesses:
one for the page-table entry, one for the data/instruction itself.
The two-access problem is solved by a special fast-lookup hardware cache
(i.e. page-table entries cached in registers):
associative registers, or translation look-aside buffers (TLBs)
Associative Registers
[Figure: associative registers hold (page #, frame #) pairs and are searched in parallel for address translation (A, A'); if page number A is among the registers, frame number A' comes out directly, e.g. page 100 → frame 708, page 929 → frame 900]
Two Level Paging Example
A logical address (32-bit machine, 4K page size) is divided into
a page number consisting of 20 bits and a page offset consisting of 12 bits.
Since the page table is itself paged, the page number is further divided into
a 10-bit outer page number and a 10-bit page offset.
Thus, a logical address is organized as (p1, p2, d), where
p1 is an index into the outer page table, and
p2 is the displacement within the page of the outer page table.
Layout: p1 (10 bits) | p2 (10 bits) | d (12 bits)
Multilevel paging
[Figure: multilevel paging - the address is translated by indexing an outer page table, which points to inner page tables, which in turn point to frames]
Example of segmentation - segment table for process P1:
  segment 0 (editor): limit 25286, base 43062
  segment 1 (data 1): limit  4425, base 68348
[Figure: process P1's logical memory (segment 0 = editor, segment 1 = data 1) maps through the segment table into physical memory, with the editor at 43062 and data 1 at 68348 up to 72773]
Virtual Memory
Background
Demand paging
Performance of demand paging
Page Replacement
Page Replacement Algorithms
Allocation of Frames
Thrashing
Demand Segmentation
Need for Virtual Memory
Virtual Memory
Separation of user logical memory from physical
memory.
Only PART of the program needs to be in memory for
execution.
Logical address space can therefore be much larger
than physical address space.
Need to allow pages to be swapped in and out.
Virtual Memory can be implemented via
Paging
Segmentation
Paging/Segmentation
Policies
Fetch Strategies
When should a page or segment be brought into primary
memory from secondary (disk) storage?
Demand Fetch
Anticipatory Fetch
Placement Strategies
When a page or segment is brought into memory, where
is it to be put?
Paging - trivial
Segmentation - significant problem
Replacement Strategies
Which page/segment should be replaced if there is not
enough room for a required page/segment?
Demand Paging
Bring a page into memory only when it is
needed.
Less I/O needed
Less Memory needed
Faster response
More users
The first reference to a page will trap to OS
with a page fault.
OS looks at an internal table to decide:
invalid reference - abort the process;
valid reference, but page not in memory - bring it in.
Valid-Invalid Bit
With each page table entry a valid-invalid bit is
associated (1 in-memory, 0 not in memory).
Initially, valid-invalid bit is set to 0 on all entries.
During address translation, if valid-invalid bit in page
table entry is 0 --- page fault occurs.
Example of a page-table snapshot:
[Figure: each page-table entry holds a frame number and a valid-invalid bit; the first four entries are valid (bit = 1, page resident in a frame), the remaining entries are invalid (bit = 0, page not in memory)]
Handling a Page Fault
Page is needed - reference to page
Step 1: Page fault occurs - trap to OS (process suspends).
Step 2: Check if the virtual memory address is valid. Kill
job if invalid reference. If valid reference, and page not in
memory, continue.
Step 3: Bring into memory - Find a free page frame, map
address to disk block and fetch disk block into page frame.
When disk read has completed, add virtual memory
mapping to indicate that page is in memory.
Step 4: Restart instruction interrupted by illegal address
trap. The process will continue as if page had always been
in memory.
What happens if there is no free frame?
Page replacement - find some page in
memory that is not really in use and swap it.
Need page replacement algorithm
Performance Issue - need an algorithm which will result
in minimum number of page faults.
Same page may be brought into memory many
times.
Performance of Demand Paging
Page Fault Rate: 0 ≤ p ≤ 1.0
If p = 0, no page faults
If p = 1, every reference is a page fault
Effective Access Time
EAT = (1-p) * memory-access +
p * (page fault overhead +
swap page out +
swap page in +
restart overhead)
Demand Paging Example
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 4 frames, FIFO replacement gives 10 page faults:
  Frame 1: 1, 5, 4
  Frame 2: 2, 1, 5
  Frame 3: 3, 2
  Frame 4: 4, 3
FIFO Replacement - Belady's Anomaly: more frames does not mean fewer page faults (the same string causes only 9 faults with 3 frames).
Optimal Algorithm
Page Protection
Segmentation Protection
Equal Allocation
E.g. if there are 100 frames and 5 processes, give each process 20 frames.
Proportional Allocation
Allocate according to the size of process
Sj = size of process Pj
S = Σ Sj
m = total number of frames
aj = allocation for Pj = (Sj/S) × m
If m = 64, S1 = 10, S2 = 127, then
a1 = 10/137 × 64 ≈ 5
a2 = 127/137 × 64 ≈ 59
Priority Allocation
Thrashing
Why does thrashing occur?
Σ (size of locality) > total memory size
Working Set Model
Δ ≡ working-set window ≡
a fixed number of page references, e.g. 10,000 instructions
WSSj (working set size of process Pj) = total number of
pages referenced in the most recent Δ (varies in time)
If Δ is too small, it will not encompass the entire locality.
If Δ is too large, it will encompass several localities.
If Δ = ∞, it will encompass the entire program.
D = Σ WSSj ≡ total demand frames
If D > m (number of available frames) ⇒ thrashing
Policy: If D > m, then suspend one of the processes.
Keeping Track of the Working Set
Approximate with an interval timer + a reference bit.
Example: Δ = 10,000
Timer interrupts after every 5,000 time units.
Whenever the timer interrupts, copy the reference bits and then reset
them all to 0.
Keep 2 bits in memory for each page (they indicate whether the page was
used within the last 10,000 to 15,000 references).
If one of the in-memory bits = 1 ⇒ page is in the working set.
Not completely accurate - cannot tell where within the interval the
reference occurred.
Improvement: 10 bits and an interrupt every 1,000 time units.
Page-Fault Frequency Scheme
Control thrashing by establishing acceptable page-fault
rate.
If the actual page-fault rate is too low, the process loses a frame.
If the actual page-fault rate is too high, the process needs and gains a frame.
Demand Paging Issues
Prepaging
Tries to prevent high level of initial paging.
E.g. If a process is suspended, keep list of pages in
working set and bring entire working set back before
restarting process.
Tradeoff - page fault vs. prepaging - depends on how many
pages brought back are reused.
Page Size Selection
fragmentation
table size
I/O overhead
locality
Demand Paging Issues
Program Structure
Array A[1024,1024] of integer
Assume each row is stored on one page (row-major order).
Assume only one frame is in memory.
Program 1 (traverses by column - touches a different page on almost every access):
for j := 1 to 1024 do
  for i := 1 to 1024 do
    A[i,j] := 0;
1024 × 1024 page faults
Program 2 (traverses by row - touches each page exactly once):
for i := 1 to 1024 do
  for j := 1 to 1024 do
    A[i,j] := 0;
1024 page faults
Demand Paging Issues