Memory Management
Monoprogramming
❑Memory contains only one user program at any point in time.
❑When the CPU is executing the program and an I/O operation is encountered, the program waits on the I/O device; during that time the CPU sits idle.
❑The CPU is not used effectively, i.e., CPU utilization is poor.
Multiprogramming
❑When one user program performs I/O, the CPU switches to the next user program; thus the CPU is kept busy at all times.
❑Increases CPU utilization by organizing jobs (programs) so that the CPU is always busy executing some user program or other.
❑In multiprogramming, the OS picks one of the jobs from the job pool and dispatches it to the CPU. When an I/O operation is encountered in that job, the OS allocates I/O devices to that job and allocates the CPU to the next job in the job pool.
Modeling Multiprogramming
Degree of multiprogramming
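The figure for this slide is not reproduced here; the usual model behind it assumes each of n resident processes independently waits for I/O a fraction p of the time, so the CPU is idle only when all n wait at once, giving utilization ≈ 1 − pⁿ. A minimal sketch (the 80% I/O-wait figure is illustrative):

```python
# CPU utilization as a function of the degree of multiprogramming n:
# the CPU is idle only when all n processes wait for I/O simultaneously.
def cpu_utilization(p: float, n: int) -> float:
    """Approximate utilization with n processes, each I/O-waiting a fraction p."""
    return 1 - p ** n

# With 80% I/O wait, one process keeps the CPU only 20% busy,
# but eight processes push utilization above 80%.
for n in (1, 2, 4, 8):
    print(n, round(cpu_utilization(0.8, n), 2))
```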
Memory Management
❑Is the task carried out by the OS and hardware to accommodate multiple
processes in main memory
❑If only a few processes can be kept in main memory, then much of the time
all processes will be waiting for I/O and the CPU will be idle
❑Hence, memory needs to be allocated efficiently in order to pack as many
processes into memory as possible
Why is Memory Management Required?
Relocation
the programmer cannot know where the program will be placed in memory when it is executed
a process may (often) be relocated in main memory due to swapping
swapping enables the OS to have a larger pool of ready-to-execute processes
memory references in code (for both instructions and data) must be translated to actual physical memory addresses
Protection
processes should not be able to reference memory locations in another process without
permission
impossible to check addresses at compile time in programs since the program could be
relocated
address references must be checked at run time by hardware
Why is Memory Management Required?
Sharing
must allow several processes to access a common portion of main memory without
compromising protection
cooperating processes may need to share access to the same data structure
better to allow each process to access the same copy of the program rather than have its own separate copy
Logical Organization
users write programs in modules with different characteristics
instruction modules are execute-only
data modules are either read-only or read/write
some modules are private, others are public
To deal effectively with user programs, the OS and hardware should support a basic notion of modules to provide the required protection and sharing
Why is Memory Management Required?
Physical Organization
secondary memory is the long-term store for programs and data, while
main memory holds the programs and data currently in use
moving information between these two levels of memory is a major
concern of memory management (OS)
it is highly inefficient to leave this responsibility to the application programmer
Simple Memory Management
❑Simpler case where there is no virtual memory
❑An executing process must be loaded entirely in main memory (if overlays are not
used)
❑Simple memory management techniques:
fixed partitioning
variable partitioning
dynamic partitioning
Fixed Partitioning
Fixed Partition
❑any process whose size is less than or equal to a partition size can be loaded into
the partition
❑if all partitions are occupied, the operating system can swap a process out of a
partition
❑Main memory use is inefficient. Any program, no matter how small, occupies an
entire partition. This is called internal fragmentation.
❑a program may be too large to fit in a partition. The programmer must then design the program with overlays
when a needed module is not present, the user program must load that module into the program’s partition, overlaying whatever program or data are there
Equal-size Partition
❑If there is an available partition, a process can be loaded into that
partition
because all partitions are of equal size, it does not matter which
partition is used
❑If all partitions are occupied by blocked processes, choose one process
to swap out to make room for the new process
Unequal-size partitions: use of multiple queues
Assign each process to the smallest
partition within which it will fit
A queue for each partition size
tries to minimize internal
fragmentation
Problem: some queues will be
empty if no processes within a size
range are present
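The smallest-partition policy above can be sketched as follows; the partition sizes (in MB) are hypothetical:

```python
# Assign each process to the smallest fixed partition that fits it,
# and report the internal fragmentation this causes.
PARTITIONS = [2, 4, 6, 12]  # hypothetical partition sizes in MB

def pick_partition(proc_size, partitions=PARTITIONS):
    """Return (partition_size, internal_fragmentation), or None if nothing fits."""
    for size in sorted(partitions):
        if proc_size <= size:
            return size, size - proc_size
    return None

print(pick_partition(3))    # -> (4, 1): 1 MB wasted inside the partition
print(pick_partition(13))   # -> None: the process needs overlays
```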
Unequal-size partitions: use of a single queue
Variable Partitions
More complex management problem
▪ Must track free and used memory
▪ Need data structures to do tracking
▪ Which hole should be used for a process?
▪External fragmentation
▪ memory that is outside any partition and is too small to be usable
by any process
[Figure: as processes start and terminate (e.g. process 2 terminates, processes 3 and 4 start), holes of varying size open up in memory]
Memory Allocation – Mechanism
The memory management (MM) system maintains data about free and allocated memory. Alternatives:
Bit maps – 1 bit per “allocation unit”
Linked lists – a free list, updated and coalesced when memory is returned by a process
Compaction
Moving processes around so that holes can be consolidated
Expensive in OS time
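A minimal sketch of the bitmap alternative, assuming an illustrative memory of 16 allocation units:

```python
# Bitmap bookkeeping: 1 bit per allocation unit (0 = free, 1 = used).
def alloc(bitmap, units):
    """First fit: find `units` consecutive free bits, mark them used."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == units:
            start = i - units + 1
            bitmap[start:i + 1] = [1] * units
            return start
    return None  # no hole large enough

def free(bitmap, start, units):
    """Return a region to the free pool by clearing its bits."""
    bitmap[start:start + units] = [0] * units

mem = [0] * 16           # 16 allocation units, all free
a = alloc(mem, 5)        # -> 0
b = alloc(mem, 4)        # -> 5
free(mem, a, 5)          # a 5-unit hole reopens at the front
c = alloc(mem, 3)        # -> 0: first fit reuses part of the hole
```

Finding a free run is a linear scan of the bitmap, which is one reason real allocators often prefer free lists.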
Memory Management - Maps
Dynamic Partitioning
❑Partitions are of variable length and number
❑Each process is allocated exactly as much memory as it requires
❑Eventually holes are formed in main memory. This is called external
fragmentation
❑Must use compaction to shift processes so they are contiguous and all free
memory is in one block
Dynamic Partitioning: an example
Placement Algorithm
❑Next-fit often leads to allocation of the largest block at the end of memory
❑First-fit favors allocation near the beginning: tends to create less fragmentation than Next-fit
❑Best-fit searches for the smallest adequate block: the fragment left behind is as small as possible
main memory quickly forms holes too small to hold any process: compaction generally needs to be done more often
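These placement policies can be sketched over a list of hole sizes (the values are illustrative); each function returns the index of the hole it would pick:

```python
# Three placement policies over a free-hole list.
def first_fit(holes, size):
    """First hole large enough, scanning from the start."""
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    """Smallest hole that is still large enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def next_fit(holes, size, last=0):
    """Like first fit, but resume scanning where the previous search ended."""
    n = len(holes)
    for k in range(n):
        i = (last + k) % n
        if holes[i] >= size:
            return i
    return None

holes = [10, 4, 20, 18, 7]
print(first_fit(holes, 6))          # -> 0: first hole >= 6
print(best_fit(holes, 6))           # -> 4: leaves a fragment of only 1
print(next_fit(holes, 6, last=3))   # -> 3: picked up mid-list
```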
Relocation and Protection
❑Cannot be sure where program will be loaded in memory
❑address locations of variables, code routines cannot be absolute
❑must keep a program out of other processes’ partitions
Fragmentation
❑External Fragmentation – total memory space exists to satisfy a request,
but it is not contiguous
❑Internal Fragmentation – allocated memory may be slightly larger than
requested memory; this size difference is memory internal to a partition,
but not being used
❑Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5 N blocks are lost to fragmentation
❑That is, one-third of memory may be unusable – the 50-percent rule
Fragmentation
Reduce external fragmentation by compaction
Shuffle memory contents to place all free memory together in one
large block
Compaction is possible only if relocation is dynamic, and is done at
execution time
I/O problem
Latch job in memory while it is involved in I/O
Do I/O only into OS buffers
Swapping
Memory Management with Linked Lists
Virtual Memory
❑A computer can address more memory than the amount physically
installed on the system.
❑This extra memory is called virtual memory; it is a section of the hard disk set up to emulate the computer's RAM
❑The main visible advantage of this scheme is that programs can be larger than physical memory.
❑It allows us to extend the use of physical memory by using disk
❑It allows us to have memory protection, because each virtual address is translated to a physical address.
Example of Virtual Memory
Paging
❑Physical address space of a process can be noncontiguous; process is allocated physical memory whenever
the latter is available
❑Avoids external fragmentation
❑Avoids problem of varying sized memory chunks
❑Divide physical memory into fixed-sized blocks called frames
❑Frame size is a power of 2, typically between 512 bytes and 16 Mbytes
❑Divide logical memory into blocks of same size called pages
❑Keep track of all free frames
❑To run a program of size N pages, need to find N free frames and load program
❑Set up a page table to translate logical to physical addresses
❑Backing store likewise split into pages
❑Still have internal fragmentation
Address Translation Scheme
Address generated by CPU is divided into:
Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
Page offset (d) – combined with base address to define the physical memory
address that is sent to the memory unit
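The p/d split is just bit manipulation. A sketch assuming a 4 KB page size and a hypothetical page-to-frame map:

```python
# Split a logical address into page number (high bits) and offset
# (low bits), then translate via a page table. 4 KB pages assumed.
PAGE_SIZE = 4096        # 2**12
OFFSET_BITS = 12

def split(logical_addr):
    p = logical_addr >> OFFSET_BITS       # page number
    d = logical_addr & (PAGE_SIZE - 1)    # offset within the page
    return p, d

def translate(logical_addr, page_table):
    p, d = split(logical_addr)
    frame = page_table[p]                 # page table maps page -> frame
    return frame * PAGE_SIZE + d

page_table = {0: 5, 1: 2}                 # illustrative contents
print(translate(4100, page_table))        # page 1, offset 4 -> 2*4096 + 4 = 8196
```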
Paging Hardware
Paging Model of Logical and Physical Memory
Paging Example
Free Frames
Page Table
The relation between virtual
addresses and physical
memory addresses is given by
the page table
Typical page table entry
Implementation of Page Table
❑Page table is kept in main memory
❑Page-table base register (PTBR) points to the page table
❑Page-table length register (PTLR) indicates size of the page table
❑In this scheme every data/instruction access requires two memory accesses
❑One for the page table and one for the data / instruction
❑The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation lookaside buffer (TLB)
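The benefit of the TLB can be quantified with an effective-access-time calculation; the timings below are illustrative assumptions, not measured values:

```python
# Effective access time with a TLB: a hit needs one memory access,
# a miss needs two (page-table lookup, then the data itself).
def effective_access_time(hit_ratio, mem_ns=100, tlb_ns=10):
    hit = tlb_ns + mem_ns            # TLB hit: translate, then access data
    miss = tlb_ns + 2 * mem_ns       # TLB miss: page table first, then data
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(effective_access_time(0.90))   # ~120 ns, vs 200 ns with no TLB
print(effective_access_time(0.99))   # ~111 ns with a better hit ratio
```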
Implementation of Page Table
❑Some TLBs store address-space identifiers (ASIDs) in each TLB entry –
uniquely identifies each process to provide address-space protection for that
process
❑Otherwise need to flush at every context switch
❑TLBs typically small (64 to 1,024 entries)
❑On a TLB miss, value is loaded into the TLB for faster access next time
❑Replacement policies must be considered
❑Some entries can be wired down for permanent fast access
Associative Memory
❑Associative memory – parallel search
Memory Protection
❑Memory protection implemented by associating protection bit with each
frame to indicate if read-only or read-write access is allowed
❑Can also add more bits to indicate page execute-only, and so on
❑Valid-invalid bit attached to each entry in the page table:
❑“valid” indicates that the associated page is in the process’ logical address space,
and is thus a legal page
❑“invalid” indicates that the page is not in the process’ logical address space
❑Or use page-table length register (PTLR)
❑Any violations result in a trap to the kernel
Page Replacement Algorithms
❑A page fault forces a choice:
❑which page must be removed
❑to make room for the incoming page
Optimal Page Replacement Algorithm
❑Replace page needed at the farthest point in future
❑Optimal but unrealizable
❑Estimate by …
❑logging page use on previous runs of process
❑although this is impractical
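Although unrealizable online, the optimal policy is easy to simulate offline once the whole reference string is known, which makes it a useful yardstick for the other algorithms. A sketch:

```python
# Offline simulation of the optimal policy: on a fault, evict the
# resident page whose next use lies farthest in the future (or never).
def opt_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)
        else:
            future = refs[i + 1:]
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future))
            frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 3))   # -> 7 faults, the minimum achievable here
```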
FIFO Page Replacement Algorithm
❑Disadvantage
❑the page that has been in memory the longest may still be heavily used
Belady’s Anomaly
❑Also called FIFO Anomaly
❑Usually, on increasing the number of frames allocated to the process virtual
memory, the process execution is faster, because fewer page faults occur.
❑Sometimes, the reverse happens, i.e. the execution time increases even
when more frames are allocated to the process. This is Belady’s Anomaly
❑Increasing the number of page frames results in an increase in the number
of page faults for a given memory access pattern. This phenomenon is
commonly experienced when using the First-In First-Out (FIFO) page
replacement algorithm
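A minimal FIFO simulation reproduces the anomaly on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

```python
from collections import deque

# FIFO replacement: evict the page that has been resident longest.
def fifo_faults(refs, n_frames):
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()          # evict the oldest resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # -> 9 faults
print(fifo_faults(refs, 4))   # -> 10 faults: more frames, more faults
```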
Belady’s Anomaly
Second Chance Page Replacement Algorithm
Least Recently Used (LRU)
❑Assume pages used recently will be used again soon
❑throw out the page that has been unused for the longest time
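A sketch of LRU using Python's OrderedDict to track recency:

```python
from collections import OrderedDict

# LRU: an OrderedDict keeps resident pages in recency order,
# so the least recently used page is always at the front.
def lru_faults(refs, n_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # page is now most recently used
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # -> 10 faults
```

Unlike FIFO, LRU is stack-based and therefore never exhibits Belady's anomaly.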
The Clock Page Replacement Algorithm
Not Recently Used Page Replacement Algorithm
❑Each page has a Referenced bit and a Modified bit
❑bits are set when the page is referenced or modified
❑Pages are classified
❑ not referenced, not modified
❑ not referenced, modified
❑ referenced, not modified
❑ referenced, modified
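The four classes above map directly onto a two-bit number; NRU evicts a page (at random) from the lowest-numbered non-empty class. A sketch:

```python
# NRU class = 2*R + M, so "not referenced, not modified" is class 0
# (evicted first) and "referenced, modified" is class 3 (kept longest).
def nru_class(referenced: bool, modified: bool) -> int:
    return 2 * referenced + modified   # bools act as 0/1 in Python

# classes in the order listed above
print([nru_class(r, m) for r in (False, True) for m in (False, True)])  # -> [0, 1, 2, 3]
```

Class 1 (not referenced, but modified) exists because a periodic clock interrupt clears the Referenced bits while Modified bits stay set.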
The Working Set Page Replacement Algorithm (1)
The working set is the set of pages used by the k most recent memory
references
w(k,t) is the size of the working set at time t
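The working set can be sketched directly from this definition; the reference string below is illustrative:

```python
# w(k, t): the set of distinct pages among the k most recent
# references, where t indexes the reference string.
def working_set(refs, k, t):
    window = refs[max(0, t - k + 1): t + 1]   # the k most recent references
    return set(window)

refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1]
print(working_set(refs, 4, 3))   # four distinct pages: little locality yet
print(working_set(refs, 4, 7))   # only {7}: tight locality around page 7
```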
The Working Set Page Replacement Algorithm (2)
Page Fault Handling
❑Hardware traps to kernel
❑General registers saved
❑OS determines which virtual page needed
❑OS checks validity of address, seeks page frame
❑If selected frame is dirty, write it to disk
❑OS schedules the new page to be brought in from disk
❑Page tables updated
❑Faulting instruction backed up to when it began
❑Faulting process scheduled
❑Registers restored
❑Program continues
Segmentation
❑Each program is subdivided into blocks of non-equal size called segments
❑When a process gets loaded into main memory, its different segments can be located anywhere
❑Each segment is fully packed with instructions/data: no internal fragmentation
❑There is external fragmentation; it is reduced when using small segments
❑In contrast with paging, segmentation is visible to the programmer
❑provided as a convenience to logically organize programs (ex: data in one segment, code in another segment)
❑must be aware of segment size limit
❑The OS maintains a segment table for each process. Each entry contains:
❑the starting physical address of that segment
❑the length of that segment (for protection)
Segmentation
❑Protection. With each entry in segment table associate:
❑validation bit = 0 ⇒ illegal segment
❑read/write/execute privileges
❑Protection bits associated with segments; code sharing occurs at segment
level.
❑Since segments vary in length, memory allocation is a dynamic storage-
allocation problem
❑Common in early minicomputers
❑Small amount of additional hardware – 4 or 8 segments
❑Used effectively in classical Unix
❑A good idea that has persisted and is supported in current hardware and OSs
❑x86 supports segments
❑Linux supports segments
Logical Address used in Segmentation
❑When a process enters the Running state, a CPU register gets loaded with the
starting address of the process’s segment table.
❑Presented with a logical address (segment number, offset) = (n,m), the CPU
indexes (with n) the segment table to obtain the starting physical address k and the
length l of that segment
❑The physical address is obtained by adding m to k (in contrast with paging)
❑the hardware also compares the offset m with the length l of that segment to determine
if the address is valid
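The lookup described above can be sketched as follows; the segment table contents are illustrative:

```python
# Segment-table translation: (n, m) -> k + m, with a limit check on m.
seg_table = [
    (1000, 400),   # segment 0: base k = 1000, length l = 400
    (4300, 100),   # segment 1
    (2300, 600),   # segment 2
]

def seg_translate(n, m, table=seg_table):
    base, length = table[n]          # index the table with segment number n
    if m >= length:                  # offset beyond the segment limit
        raise MemoryError(f"offset {m} exceeds limit {length} of segment {n}")
    return base + m                  # add, unlike paging's concatenation

print(seg_translate(2, 53))   # -> 2353
```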
Logical-to-Physical Address Translation in Segmentation
Comparison between Paging and Segmentation
Implementation of Pure Segmentation
Segmentation with Paging: MULTICS
Any Queries?