Main Memory
Memory Management
■ Background
■ Swapping
■ Contiguous Memory Allocation
■ Paging
■ Structure of the Page Table
■ Segmentation
■ Example: The Intel Pentium
■ Main memory and registers are only storage CPU can access directly
■ Memory unit only sees a stream of requests: an address + a read request, or an address + data and a write request
■ Logical and physical addresses are the same in compile-time and load-time
address-binding schemes; logical (virtual) and physical addresses differ in
execution-time address-binding scheme
■ Logical address space is the set of all logical addresses generated by a
program
■ Physical address space is the set of all physical addresses corresponding to these logical addresses
■ To start, consider a simple scheme where the value in the relocation register is added to every address generated by a user process at the time it is sent to memory (see the sketch after this list)
– Base register now called relocation register
– MS-DOS on Intel 80x86 used 4 relocation registers
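■ A minimal C sketch of this translation (not from the original slides; the register values and the trap handling are illustrative assumptions):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative MMU state: these values normally live in hardware registers
     * loaded by the dispatcher on each context switch. */
    static uint32_t relocation_reg = 14000;  /* base added to every logical address        */
    static uint32_t limit_reg      = 12000;  /* size of the process's logical address space */

    /* Check the logical address against the limit, then add the relocation register. */
    uint32_t translate(uint32_t logical)
    {
        if (logical >= limit_reg) {
            fprintf(stderr, "trap: addressing error (logical address %u)\n", logical);
            exit(EXIT_FAILURE);
        }
        return logical + relocation_reg;     /* physical address sent to the memory unit */
    }

    int main(void)
    {
        printf("logical 346 -> physical %u\n", translate(346));  /* prints 14346 */
        return 0;
    }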
■ The user program deals with logical addresses; it never sees the real physical
addresses
– Execution-time binding occurs when a reference is made to a location in memory
– Logical address is bound to a physical address at that point
■ Useful when large amounts of code are needed to handle infrequently occurring
cases
■ If the next process to be put on the CPU is not in memory, need to swap out a process and swap in the target process
■ Context switch time can then be very high
■ Can reduce context-switch time by reducing the size of memory swapped – by knowing how much memory is really being used
– System calls to inform OS of memory use via request memory and release memory
■ Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
– Produces the smallest leftover hole
■ Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole
■ First-fit and best-fit are better than worst-fit in terms of speed and storage utilization (a sketch of both searches follows)
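■ A minimal C sketch of both searches over a free-hole list (the struct and list layout are illustrative assumptions):

    #include <stddef.h>

    /* Illustrative free-hole list for contiguous allocation. */
    struct hole {
        size_t start;
        size_t size;
        struct hole *next;
    };

    /* First-fit: take the first hole that is big enough. */
    struct hole *first_fit(struct hole *free_list, size_t request)
    {
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)
                return h;
        return NULL;
    }

    /* Best-fit: search the entire list for the smallest hole that is big enough
     * (leaves the smallest leftover hole). */
    struct hole *best_fit(struct hole *free_list, size_t request)
    {
        struct hole *best = NULL;
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request && (best == NULL || h->size < best->size))
                best = h;
        return best;
    }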
■ To run a program of size N pages, need to find N free frames and load program
■ For a logical address space of size 2^m and page size 2^n, a logical address is divided as follows:
    page number p (m - n bits) | page offset d (n bits)
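■ A minimal C sketch of the page-number/offset split and the page-table lookup (the geometry below – 4 KB pages, a 1,024-entry table – is an illustrative assumption):

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative geometry: page size 2^n with n = 12 (4 KB), 1,024 pages. */
    #define OFFSET_BITS 12u
    #define PAGE_SIZE   (1u << OFFSET_BITS)
    #define NUM_PAGES   1024u

    static uint32_t page_table[NUM_PAGES];        /* page number -> frame number */

    uint32_t translate(uint32_t logical)
    {
        uint32_t p = logical >> OFFSET_BITS;      /* page number (the high m - n bits) */
        uint32_t d = logical & (PAGE_SIZE - 1);   /* page offset (the low n bits)      */
        uint32_t f = page_table[p];               /* one memory access to the table    */
        return (f << OFFSET_BITS) | d;            /* frame number combined with offset */
    }

    int main(void)
    {
        page_table[3] = 7;                        /* suppose page 3 maps to frame 7 */
        printf("0x%X -> 0x%X\n", 0x3ABCu, translate(0x3ABCu));  /* 0x3ABC -> 0x7ABC */
        return 0;
    }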
■ The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffers (TLBs)
■ Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely identifies
each process to provide address-space protection for that process
– Otherwise need to flush at every context switch
■ TLBs typically small (64 to 1,024 entries)
■ On a TLB miss, value is loaded into the TLB for faster access next time
– Replacement policies must be considered
– Some entries can be wired down for permanent fast access
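■ A minimal C sketch of a software view of the TLB lookup with ASIDs and wired-down entries (the entry layout, the replacement policy, and page_table_walk() are illustrative assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative TLB entry: real TLBs are small, fully associative hardware. */
    struct tlb_entry {
        bool     valid;
        bool     wired;    /* wired-down entries are never chosen for replacement */
        uint16_t asid;     /* address-space identifier of the owning process      */
        uint32_t vpn;      /* virtual page number                                 */
        uint32_t frame;    /* mapped frame number                                 */
    };

    #define TLB_SIZE 64
    static struct tlb_entry tlb[TLB_SIZE];
    static unsigned next_victim;                     /* trivial round-robin replacement */

    extern uint32_t page_table_walk(uint32_t vpn);   /* assumed: full page-table lookup */

    uint32_t tlb_translate(uint16_t asid, uint32_t vpn)
    {
        /* Hit: entry must match both the ASID and the virtual page number. */
        for (unsigned i = 0; i < TLB_SIZE; i++)
            if (tlb[i].valid && tlb[i].asid == asid && tlb[i].vpn == vpn)
                return tlb[i].frame;

        /* Miss: walk the page table (the extra memory access), then cache the result. */
        uint32_t frame = page_table_walk(vpn);
        while (tlb[next_victim].wired)               /* skip wired-down entries */
            next_victim = (next_victim + 1) % TLB_SIZE;
        tlb[next_victim] = (struct tlb_entry){ true, false, asid, vpn, frame };
        next_victim = (next_victim + 1) % TLB_SIZE;
        return frame;
    }

■ Without the ASID check in the hit loop, a stale entry left by another process could match, which is why a TLB without ASIDs must be flushed at every context switch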
■ Hierarchical Paging
■ Example: a 32-bit logical address with a 1 KB page size has a 22-bit page number and a 10-bit page offset
■ Since the page table is itself paged, the 22-bit page number is further divided into:
– a 12-bit outer page number (p1)
– a 10-bit offset within the inner page table (p2)
■ Thus, a logical address is divided as follows:
    p1 (12 bits) | p2 (10 bits) | d (10 bits)
■ where p1 is an index into the outer page table, and p2 is the displacement within
the page of the inner page table
■ Known as forward-mapped page table
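■ A minimal C sketch of the forward-mapped walk for the 12/10/10 split above (the table representation is an illustrative assumption):

    #include <stdint.h>

    /* Split from the example above: p1 (12 bits) | p2 (10 bits) | d (10 bits). */
    #define P1_BITS 12
    #define P2_BITS 10
    #define D_BITS  10

    static uint32_t *outer_page_table[1u << P1_BITS];   /* p1 -> inner page table */

    uint32_t translate(uint32_t logical)
    {
        uint32_t p1 = logical >> (P2_BITS + D_BITS);               /* index into the outer page table */
        uint32_t p2 = (logical >> D_BITS) & ((1u << P2_BITS) - 1); /* displacement in the inner table */
        uint32_t d  = logical & ((1u << D_BITS) - 1);              /* offset within the page          */

        uint32_t *inner = outer_page_table[p1];   /* forward-mapped: outer table first ... */
        uint32_t frame  = inner[p2];              /* ... then the inner page table         */
        return (frame << D_BITS) | d;
    }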
■ For a 64-bit logical address space with 4 KB pages, the corresponding two-level split would be:
    p1 (42 bits) | p2 (10 bits) | d (12 bits)
■ Hashed Page Tables
■ Each element contains (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element
■ Virtual page numbers are compared in this chain searching for a match
– If a match is found, the corresponding physical frame is extracted
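■ A minimal C sketch of the chain search just described (the element struct mirrors items (1)–(3); the bucket count and hash function are illustrative assumptions):

    #include <stddef.h>
    #include <stdint.h>

    /* One chain element: (1) virtual page number, (2) mapped frame, (3) next pointer. */
    struct hpt_element {
        uint32_t vpn;
        uint32_t frame;
        struct hpt_element *next;
    };

    #define HPT_BUCKETS 1024u
    static struct hpt_element *hashed_page_table[HPT_BUCKETS];

    /* Hash the virtual page number, then walk the chain comparing VPNs.
     * Returns 1 and fills *frame on a match, 0 otherwise (a page fault in a real system). */
    int hpt_lookup(uint32_t vpn, uint32_t *frame)
    {
        for (struct hpt_element *e = hashed_page_table[vpn % HPT_BUCKETS]; e != NULL; e = e->next) {
            if (e->vpn == vpn) {
                *frame = e->frame;   /* match: extract the corresponding physical frame */
                return 1;
            }
        }
        return 0;
    }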
■ Inverted Page Table
■ Entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page
■ Decreases memory needed to store each page table, but increases time needed to search the
table when a page reference occurs
■ Use hash table to limit the search to one — or at most a few — page-table entries
– TLB can accelerate access
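■ A minimal C sketch of the search an inverted page table implies; the linear scan below is exactly what the hash table (or the TLB) is meant to avoid (the entry layout and table size are illustrative assumptions):

    #include <stdint.h>

    /* One entry per physical frame: the virtual page stored there plus its owner. */
    struct ipt_entry {
        uint32_t pid;   /* owning process                            */
        uint32_t vpn;   /* virtual page number stored in this frame  */
    };

    #define NUM_FRAMES 4096u
    static struct ipt_entry inverted_table[NUM_FRAMES];

    /* The frame number is simply the index of the matching entry.
     * Returns -1 if (pid, vpn) is not resident in physical memory. */
    int32_t ipt_lookup(uint32_t pid, uint32_t vpn)
    {
        for (uint32_t frame = 0; frame < NUM_FRAMES; frame++)
            if (inverted_table[frame].pid == pid && inverted_table[frame].vpn == vpn)
                return (int32_t)frame;
        return -1;
    }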