Memory Management
• The memory management algorithms vary from a primitive bare-machine
approach to a strategy that uses paging. Each approach has its own
advantages and disadvantages.
• Memory management is the functionality of an operating system which handles or
manages primary memory and moves processes back and forth between main
memory and disk during execution.
• Memory management keeps track of each and every memory location, regardless of
whether it is allocated to some process or free.
• It checks how much memory is to be allocated to processes.
• It decides which process will get memory at what time.
• It tracks when memory is freed or unallocated and updates the status accordingly.
Address Binding
• Compile time. If you know at compile time where the process will reside in memory, then
absolute code can be generated. For example, if you know that a user process will reside
starting at location R, then the generated compiler code will start at that location and extend
up from there. If, at some later time, the starting location changes, then it will be necessary
to recompile this code.
• Load time. If it is not known at compile time where the process will reside in memory,
then the compiler must generate relocatable code. In this case, final binding is delayed until
load time. If the starting address changes, we need only reload the user code to incorporate
this changed value.
• Execution time. If the process can be moved during its execution from one memory
segment to another, then binding must be delayed until run time. Special hardware must be
available for this scheme to work. Most operating systems use this method.
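To make the load-time case concrete, the following is a minimal sketch in C of a loader patching relocatable addresses; the image layout and relocation list are invented for illustration and do not correspond to any real object-file format.

    #include <stdio.h>
    #include <stdint.h>

    /* Load-time binding sketch (hypothetical format): the compiler emits addresses
     * relative to 0, and the loader adds the chosen load address ("base") to every
     * relocatable word before the program runs. If the base changes, only this
     * patching step has to be repeated, not the compilation. */
    #define IMAGE_WORDS 6

    int main(void) {
        uint32_t image[IMAGE_WORDS] = { 0x10, 0x14, 0x00, 0x08, 0x0C, 0x04 };
        int relocatable[] = { 0, 1, 3 };   /* which words hold relative addresses */
        uint32_t base = 0x40000;           /* where the OS chose to load the code */

        for (int i = 0; i < 3; i++)
            image[relocatable[i]] += base; /* relative address -> absolute address */

        for (int i = 0; i < IMAGE_WORDS; i++)
            printf("word %d: 0x%05X\n", i, (unsigned)image[i]);
        return 0;
    }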
Process Address Space
• The process address space is the set of logical addresses that a process
references in its code.
• The operating system takes care of mapping the logical addresses to physical
addresses at the time of memory allocation to the program.
• There are three types of addresses used in a program before and after memory is allocated:
• Symbolic addresses
• The addresses used in source code. The variable names, constants, and instruction labels are the basic
elements of the symbolic address space.
• Relative addresses
• At the time of compilation, a compiler converts symbolic addresses into relative addresses.
• Physical addresses
• The loader generates these addresses at the time when a program is loaded into main memory.
• Virtual and physical addresses are the same in the compile-time and load-time
address-binding schemes; virtual and physical addresses differ in the execution-
time address-binding scheme.
• The set of all logical addresses generated by a program is referred to as
a logical address space. The set of all physical addresses corresponding to
these logical addresses is referred to as a physical address space.
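As a small illustration (assuming an ordinary hosted C environment), the symbolic names below are bound to logical addresses that the program can print; the physical frames behind them are chosen by the operating system and are not visible from user code.

    #include <stdio.h>

    int counter = 0;            /* "counter" is a symbolic address in the source */

    int main(void) {
        int local = 42;         /* another symbolic name                         */

        /* At run time these names have been bound to logical (virtual) addresses;
         * the mapping to physical addresses is done later by the OS and the MMU. */
        printf("address of counter: %p\n", (void *)&counter);
        printf("address of local:   %p\n", (void *)&local);
        return 0;
    }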
Logical Versus Physical Address Space
• An address generated by the CPU is commonly referred to as a logical address, whereas
an address seen by the memory unit— that is, the one loaded into the memory-address
register of the memory—is commonly referred to as a physical address.
• Binding addresses at either compile or load time generates identical logical and physical
addresses.
• The execution-time address-binding scheme results in differing logical and physical
addresses. In this case, we usually refer to the logical address as a virtual address. We use
logical address and virtual address interchangeably in this text.
• The set of all logical addresses generated by a program is a logical address space.
• The set of all physical addresses corresponding to these logical addresses is a physical address space.
Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ.
The run-time mapping from virtual to physical
addresses is done by a hardware device called
the Memory-Management Unit (MMU).
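A sketch of the simplest such mapping, a relocation (base) register plus a limit register, is shown below; the register values and the trap behaviour are illustrative, not tied to any particular hardware.

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Relocation-register MMU sketch: every logical address generated by the CPU
     * is checked against the limit register and then has the base (relocation)
     * register added to it to form the physical address. Values are illustrative. */
    static uint32_t base_reg  = 0x14000;  /* set by the OS when the process is loaded */
    static uint32_t limit_reg = 0x08000;  /* size of the process's logical space      */

    uint32_t mmu_translate(uint32_t logical) {
        if (logical >= limit_reg) {       /* out-of-range access -> trap to the OS */
            fprintf(stderr, "trap: logical address 0x%X out of range\n", (unsigned)logical);
            exit(EXIT_FAILURE);
        }
        return logical + base_reg;        /* physical = logical + relocation base */
    }

    int main(void) {
        printf("logical 0x0346 -> physical 0x%X\n", (unsigned)mmu_translate(0x0346));
        printf("logical 0x7FFF -> physical 0x%X\n", (unsigned)mmu_translate(0x7FFF));
        return 0;
    }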
First In First Out (FIFO) algorithm
• The oldest page in main memory is the one selected for replacement.
Reference string:  0 2 1 6 4 0 1 0 3 1 2 1
Frame 1:           0 0 0 0 4 4 4 4 4 4 2 2
Frame 2:             2 2 2 2 0 0 0 0 0 0 0
Frame 3:               1 1 1 1 1 1 3 3 3 3
Frame 4:                 6 6 6 6 6 6 1 1 1
Fault/Hit:         X X X X X X H H X X X H
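The trace above can be reproduced with a short simulation; the sketch below (plain C, with the frame count and reference string taken from the example) reports 9 faults for 4 frames.

    #include <stdio.h>

    #define FRAMES 4

    /* FIFO page replacement over the reference string from the example above. */
    int main(void) {
        int refs[] = {0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1};
        int n = sizeof refs / sizeof refs[0];
        int frame[FRAMES];
        int next = 0;                       /* index of the oldest frame (FIFO pointer) */
        int used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < used; f++)
                if (frame[f] == refs[i]) { hit = 1; break; }

            if (!hit) {
                faults++;
                if (used < FRAMES) {
                    frame[used++] = refs[i];        /* free frame still available */
                } else {
                    frame[next] = refs[i];          /* evict the oldest page      */
                    next = (next + 1) % FRAMES;
                }
            }
            printf("%d: %s\n", refs[i], hit ? "hit" : "fault");
        }
        printf("total faults: %d\n", faults);
        return 0;
    }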
Optimal Page algorithm
• An optimal page-replacement
algorithm has the lowest page-fault
rate of all algorithms. An optimal
page-replacement algorithm exists,
and has been called OPT or MIN.
• Replace the page that will not be used for the longest period of time;
that is, look forward in time to when each resident page will next be used.
Reference string:  0 2 1 6 4 0 1 0 3 1 2 1
Frame 1:           0 0 0 0 0 0 0 0 3 3 3 3
Frame 2:             2 2 2 2 2 2 2 2 2 2 2
Frame 3:               1 1 1 1 1 1 1 1 1 1
Frame 4:                 6 4 4 4 4 4 4 4 4
Fault/Hit:         X X X X X H H H X H H H
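Because OPT needs the future reference string, it can only be simulated after the fact; a sketch of that simulation (plain C, same reference string and 4 frames as above, giving 6 faults):

    #include <stdio.h>

    #define FRAMES 4

    /* Optimal (OPT/MIN) replacement: on a fault, evict the resident page whose
     * next use lies furthest in the future (or that is never used again). */
    int main(void) {
        int refs[] = {0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1};
        int n = sizeof refs / sizeof refs[0];
        int frame[FRAMES], used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int f = 0; f < used; f++)
                if (frame[f] == refs[i]) { hit = 1; break; }

            if (!hit) {
                faults++;
                if (used < FRAMES) {
                    frame[used++] = refs[i];
                } else {
                    int victim = 0, farthest = -1;
                    for (int f = 0; f < FRAMES; f++) {
                        int next_use = n;           /* n means "never used again" */
                        for (int j = i + 1; j < n; j++)
                            if (refs[j] == frame[f]) { next_use = j; break; }
                        if (next_use > farthest) { farthest = next_use; victim = f; }
                    }
                    frame[victim] = refs[i];
                }
            }
            printf("%d: %s\n", refs[i], hit ? "hit" : "fault");
        }
        printf("total faults: %d\n", faults);
        return 0;
    }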
Least Recently Used (LRU) algorithm
• Page which has not been
used for the longest time in
main memory is the one
which will be selected for
replacement.
• Easy to implement: keep a list of pages and replace the one whose
last use lies furthest back in time.
Reference string:  0 2 1 6 4 0 1 0 3 1 2 1
Frame 1:           0 0 0 0 4 4 4 4 4 4 2 2
Frame 2:             2 2 2 2 0 0 0 0 0 0 0
Frame 3:               1 1 1 1 1 1 1 1 1 1
Frame 4:                 6 6 6 6 6 3 3 3 3
Fault/Hit:         X X X X X X H H X H X H
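One common way to implement the "look back in time" idea is to stamp each frame with the time of its last use; a sketch over the same reference string (4 frames, 8 faults, matching the trace above):

    #include <stdio.h>

    #define FRAMES 4

    /* LRU replacement: on a fault, evict the resident page whose last use lies
     * furthest in the past. last_used[] records when each frame was last touched. */
    int main(void) {
        int refs[] = {0, 2, 1, 6, 4, 0, 1, 0, 3, 1, 2, 1};
        int n = sizeof refs / sizeof refs[0];
        int frame[FRAMES], last_used[FRAMES];
        int used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = -1;
            for (int f = 0; f < used; f++)
                if (frame[f] == refs[i]) { hit = f; break; }

            if (hit >= 0) {
                last_used[hit] = i;                 /* refresh recency on a hit */
            } else {
                faults++;
                int victim;
                if (used < FRAMES) {
                    victim = used++;
                } else {
                    victim = 0;                     /* find least recently used frame */
                    for (int f = 1; f < FRAMES; f++)
                        if (last_used[f] < last_used[victim]) victim = f;
                }
                frame[victim] = refs[i];
                last_used[victim] = i;
            }
            printf("%d: %s\n", refs[i], hit >= 0 ? "hit" : "fault");
        }
        printf("total faults: %d\n", faults);
        return 0;
    }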
Page Buffering algorithm
• To get a process started quickly, keep a pool of free frames.
• On a page fault, select a page to be replaced.
• Read the new page into a frame from the free pool, update the page table, and restart
the process; there is no need to wait for the victim page to be written out first (see the sketch below).
• Later, write the dirty victim page out to disk and place the frame holding the replaced
page back into the free pool.
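The ordering of those steps is the whole point, so here is a stub-based sketch of the fault path; the structures and function names are invented for illustration and are not a real kernel's API.

    #include <stdio.h>
    #include <stdbool.h>

    /* Sketch of the page-fault path with a free-frame pool. The faulting process
     * restarts as soon as the new page has been read into a pooled frame; writing
     * back the dirty victim and returning its frame to the pool happen afterwards. */

    typedef struct { int number; bool dirty; } frame_t;

    static frame_t pool_frame   = { 7, false };  /* a pre-freed frame from the pool     */
    static frame_t victim_frame = { 3, true  };  /* frame of the page chosen as victim  */

    static void read_page_into(int page, frame_t *f) {
        printf("read page %d from disk into frame %d\n", page, f->number);
    }
    static void update_page_table(int page, frame_t *f) {
        printf("page table: page %d -> frame %d\n", page, f->number);
    }
    static void restart_process(void) { printf("process restarted\n"); }
    static void write_back(frame_t *f) {
        printf("frame %d written back to disk\n", f->number);
        f->dirty = false;
    }
    static void add_to_free_pool(frame_t *f) {
        printf("frame %d returned to the free pool\n", f->number);
    }

    void handle_page_fault(int faulting_page) {
        read_page_into(faulting_page, &pool_frame);  /* 1. use a frame from the pool   */
        update_page_table(faulting_page, &pool_frame);
        restart_process();                           /* 2. the process runs again here */

        if (victim_frame.dirty)
            write_back(&victim_frame);               /* 3. write the dirty victim out  */
        add_to_free_pool(&victim_frame);             /* 4. its frame refills the pool  */
    }

    int main(void) {
        handle_page_fault(42);
        return 0;
    }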
• Least Frequently Used (LFU) algorithm
• The page with the smallest count is the one which will be selected for replacement.
• This algorithm suffers from the situation in which a page is used heavily during the
initial phase of a process, but then is never used again.
• Most Frequently Used (MFU) algorithm
• This algorithm is based on the argument that the page with the smallest count was
probably just brought in and has yet to be used.
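Both policies reduce to scanning per-frame reference counts; a minimal sketch follows, where the resident pages and their counts are made-up example data.

    #include <stdio.h>

    #define FRAMES 4

    /* LFU/MFU victim selection (sketch): each resident page carries a reference
     * count. LFU replaces the page with the smallest count; MFU replaces the page
     * with the largest count. The pages and counts below are illustrative only. */
    int main(void) {
        int page[FRAMES]  = { 5, 9, 2, 7 };
        int count[FRAMES] = { 14, 3, 8, 3 };   /* reference counts kept per frame */

        int lfu = 0, mfu = 0;
        for (int f = 1; f < FRAMES; f++) {
            if (count[f] < count[lfu]) lfu = f;    /* least frequently used */
            if (count[f] > count[mfu]) mfu = f;    /* most frequently used  */
        }
        printf("LFU victim: page %d (count %d)\n", page[lfu], count[lfu]);
        printf("MFU victim: page %d (count %d)\n", page[mfu], count[mfu]);
        return 0;
    }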