Memory Management
Learning Objectives
The basic functionality of the four memory allocation schemes presented in this chapter: single-user, fixed partitions, dynamic partitions, and relocatable dynamic partitions
Best-fit memory allocation as well as first-fit memory allocation schemes
How a memory list keeps track of available memory
Introduction
Management of main memory is critical
Entire system performance depends on two items:
How much memory is available
Optimization of memory during job processing
Fixed Partitions
Commercially available in the 1950s and 1960s
Main memory is partitioned at system startup
One contiguous partition per job
Permits multiprogramming
Partition sizes remain static; the system must be shut down to reconfigure them
Requires:
Protection of each job's memory space
Matching job size with partition size
Fixed Partitions
A program may be too big for any partition. Internal fragmentation is significant. Unequal-sized partitions make the problem less severe but cannot eliminate it. Placement is trivial with equal-sized partitions; with unequal sizes it can be done in one of two ways.
[Figure: main memory divided into fixed partitions; Process B and Process C each occupy a partition, leaving unused space (internal fragmentation) inside it]
Fixed Partitioning
Partition main memory into a set of nonoverlapping areas, or partitions
Partitions can be of equal or unequal sizes
One approach assigns each process to the best-fit partition, with a separate queue for each partition size, to minimize internal fragmentation
The other uses a single queue: when it's time to load a process into main memory, the smallest available partition that will hold it is selected, so there is no danger that a process waits in one queue while space is present in another (a best-fit sketch in C follows)
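A minimal sketch of that selection rule, assuming four fixed partitions with invented sizes; the leftover space inside the chosen partition is the internal fragmentation described above:

#include <stdio.h>

#define NUM_PARTITIONS 4

/* Hypothetical fixed partitions created at system startup. */
typedef struct {
    int size;   /* partition size in KB              */
    int job;    /* job number occupying it, 0 = free */
} Partition;

/* Pick the smallest free partition that still holds the job. */
int allocate(Partition p[], int n, int job, int job_size) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (p[i].job == 0 && p[i].size >= job_size &&
            (best == -1 || p[i].size < p[best].size))
            best = i;
    }
    if (best != -1)
        p[best].job = job;
    return best;               /* -1 means the job must wait */
}

int main(void) {
    Partition mem[NUM_PARTITIONS] = {{100, 0}, {25, 0}, {25, 0}, {50, 0}};
    int idx = allocate(mem, NUM_PARTITIONS, 1, 30);   /* job 1 needs 30 KB */
    if (idx >= 0)
        printf("job 1 -> partition %d, %d KB unused inside it\n",
               idx, mem[idx].size - 30);
    return 0;
}

Note that choosing the closest-fitting partition only limits internal fragmentation; it cannot remove it, because partition sizes never change.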
Dynamic Partitions
Main memory is not partitioned
Jobs are given the memory they request when loaded
One contiguous partition per job
Disadvantages
Full memory utilization only during loading of the initial jobs
Subsequent allocations waste memory
External fragmentation: fragments between allocated blocks
Placement Algorithms
Need for compaction
Possible algorithms:
Best fit: choose the smallest hole that fits
First fit: choose the first hole that fits, scanning from the beginning
Next fit: choose the first hole that fits, scanning from the last choice
Worst fit: the opposite of best fit
Placement isn't an issue with fixed partitions (first fit and best fit are sketched below)
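A minimal sketch contrasting first fit and best fit over a hypothetical free list of holes (starts and sizes invented); next fit and worst fit differ only in where the scan starts or which hole is preferred:

#include <stdio.h>

/* A free list for a dynamic-partition scheme: each entry is a hole
   described by its start address and size.                          */
typedef struct { int start; int size; } Hole;

/* First fit: scan from the beginning, take the first hole that fits. */
int first_fit(Hole h[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (h[i].size >= request) return i;
    return -1;
}

/* Best fit: scan the whole list, take the smallest hole that fits. */
int best_fit(Hole h[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (h[i].size >= request &&
            (best == -1 || h[i].size < h[best].size))
            best = i;
    return best;
}

int main(void) {
    Hole holes[] = {{0, 60}, {200, 30}, {400, 100}};
    int n = 3, request = 25;
    printf("first fit -> hole %d, best fit -> hole %d\n",
           first_fit(holes, n, request), best_fit(holes, n, request));
    return 0;
}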
Replacement Algorithms
When all processes in main memory are blocked, the OS can swap in a suspended-ready process, if one exists. The OS must choose which process to replace (swap out or suspend) to make room for the incoming process. Considerations: state (blocked or ready), size (relative to the process being swapped in), priority, ... Later we will discuss replacement algorithms for virtual memory management schemes.
Deallocation
Deallocation: freeing allocated memory space
For a fixed-partition system:
Straightforward process
The Memory Manager resets the status of the job's memory block to free upon job completion
Any code may be used; for example, binary values with zero indicating free and one indicating busy (sketched below)
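A minimal sketch, with invented field names, of how little work fixed-partition deallocation requires when the status code is a single bit (0 = free, 1 = busy):

#include <stdio.h>

typedef struct {
    int size;     /* partition size, never changes */
    int status;   /* 1 = busy, 0 = free            */
    int job;      /* owning job number while busy  */
} PartitionEntry;

void deallocate_fixed(PartitionEntry *entry) {
    entry->status = 0;   /* mark the block free */
    entry->job    = 0;   /* no job owns it now  */
}

int main(void) {
    PartitionEntry p = {100, 1, 7};   /* 100 KB partition held by job 7 */
    deallocate_fixed(&p);
    printf("status after deallocation: %d (0 = free)\n", p.status);
    return 0;
}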
Deallocation (continued)
For a dynamic-partition system:
More complex: the algorithm tries to combine free areas of memory
The memory list entry changes to show the new size of the free space
The new size is the combined total of the two free partitions, for example 200 + 5 = 205 (sketched below)
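A minimal sketch of that combining step, assuming a memory list kept in address order; the sizes mirror the 200 + 5 example, and the second entry is simply zeroed out to act as a null entry:

#include <stdio.h>

typedef struct {
    int start;
    int size;
    int busy;     /* 1 = allocated, 0 = free */
} Block;

/* Free block i and merge it with an adjacent free block that follows it. */
void deallocate_dynamic(Block list[], int n, int i) {
    list[i].busy = 0;
    if (i + 1 < n && !list[i + 1].busy &&
        list[i].start + list[i].size == list[i + 1].start) {
        list[i].size += list[i + 1].size;   /* combined total of the two holes */
        list[i + 1].size = 0;               /* second entry becomes a null entry */
    }
}

int main(void) {
    Block list[] = {{0, 200, 1}, {200, 5, 0}};
    deallocate_dynamic(list, 2, 0);
    printf("new free block: start %d, size %d\n", list[0].start, list[0].size);
    return 0;
}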
Busy list
Must show the new locations for all of the jobs already in process that were relocated
Relocation register
Contains the value that must be added to each address referenced in the program
Needed so the program can access the correct memory addresses after relocation
If the program is not relocated, a zero value is stored in the program's relocation register (see the sketch below)
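A minimal sketch of how the relocation register is applied; the register value and the referenced address are invented, and a job moved toward lower memory gets a negative register value:

#include <stdio.h>

/* After compaction, the loader records how far each job was moved;
   the hardware then adds that value to every address the program
   references. An unrelocated job has a relocation register of zero. */
int relocation_register = -12000;   /* job was moved 12000 bytes lower */

int translate(int referenced_address) {
    return referenced_address + relocation_register;
}

int main(void) {
    printf("address 27543 now maps to %d\n", translate(27543));
    return 0;
}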
Compaction entails more overhead
Goal: optimize processing time and memory use while keeping overhead as low as possible
Summary
Four memory management techniques
Single-user systems, fixed partitions, dynamic partitions, and relocatable dynamic partitions
Each places severe restrictions on job size
Sufficient for the first three generations of computers
Summary (continued)
New memory management trends emerged in the late 1960s and early 1970s
Discussed in the next chapter
Common characteristics of these memory schemes:
Programs are not stored in contiguous memory locations
Not all segments reside in memory during execution of the job
Learning Objectives
The basic functionality of the memory allocation methods covered in this chapter: paged, demand paging, segmented, and segmented/demand paged memory allocation
The influence that these page allocation methods have had on virtual memory
The difference between a first-in first-out page replacement policy, a least-recently-used page replacement policy, and a clock page replacement policy
Introduction
Evolution of virtual memory
Paged, demand paging, segmented, and segmented/demand paging
Foundation for current virtual memory methods
Introduction (continued)
Page replacement policies
First-in first-out
Least recently used
Most frequently used
Clock replacement and bit-shifting
Mechanics of paging
The working set
Virtual memory
Concepts and advantages
Cache memory
Concepts and advantages
Disk Structure
Address resolution
Translating a job space address into a physical address
Relative address into absolute address (sketched below)
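A minimal sketch of that translation, assuming a 1,024-byte page size and an invented page table mapping page numbers to frame numbers:

#include <stdio.h>

#define PAGE_SIZE 1024   /* assumed page/frame size in bytes */

/* The relative address is split into a page number and a displacement;
   the page table supplies the frame holding that page, and the absolute
   address is rebuilt from frame number and displacement.                */
int page_table[] = {5, 9, 2, 12};   /* page -> frame (invented) */

int to_absolute(int relative) {
    int page         = relative / PAGE_SIZE;
    int displacement = relative % PAGE_SIZE;
    int frame        = page_table[page];
    return frame * PAGE_SIZE + displacement;
}

int main(void) {
    printf("relative 1502 -> absolute %d\n", to_absolute(1502));
    return 0;
}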
Disadvantages
Increased overhead from address resolution
Internal fragmentation in the last page
The entire job must be stored in memory
Demand Paging
Pages brought into memory only as needed
Removes the restriction that the entire program be in memory
Requires high-speed page access
Disadvantages
Increased overhead caused by tables and page interrupts
First-In First-Out
Removes the page that has been in memory the longest
Efficiency measured by the ratio of page interrupts (faults) to page requests
FIFO example: not so good; 9 faults out of 11 requests is a failure rate of roughly 82%
FIFO anomaly: adding more memory frames does not necessarily lead to better performance (simulation sketched below)
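A minimal sketch of a FIFO simulation with two page frames; the reference string is invented but chosen so it produces 9 page interrupts out of 11 requests, matching the ratio above:

#include <stdio.h>

#define FRAMES 2   /* two page frames, as in the example above */

int main(void) {
    char requests[] = {'A','B','A','C','A','B','D','B','A','C','D'};
    int n = sizeof requests / sizeof requests[0];

    char frames[FRAMES] = {0, 0};   /* 0 means the frame is empty          */
    int oldest = 0;                 /* index of the longest-resident page  */
    int faults = 0;

    for (int r = 0; r < n; r++) {
        int hit = 0;
        for (int i = 0; i < FRAMES; i++)
            if (frames[i] == requests[r]) hit = 1;
        if (!hit) {                          /* page interrupt            */
            frames[oldest] = requests[r];    /* swap out the oldest page  */
            oldest = (oldest + 1) % FRAMES;
            faults++;
        }
    }
    printf("%d interrupts / %d requests = %.0f%% failure rate\n",
           faults, n, 100.0 * faults / n);
    return 0;
}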
LRU Implementation
Bit-shifting technique (Fig. 3.11, p81)
Uses an 8-bit reference byte and a bit-shifting technique
At regular clock intervals, each page's reference byte is shifted one bit to the right
If a page is referenced during the interval, its leftmost reference bit is set to 1
When a page must be replaced, the LRU policy selects the page with the smallest value in its reference byte, because that page is the least recently used (see the sketch below)
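A minimal sketch of this aging (bit-shifting) approximation of LRU, with an invented reference pattern; page 0 ends up with the smallest reference byte, so it is chosen as the victim:

#include <stdio.h>
#include <stdint.h>

#define PAGES 4

uint8_t ref_byte[PAGES];   /* one 8-bit reference byte per resident page */

/* Called once per clock interval with the pages referenced during it. */
void tick(const int *referenced, int count) {
    for (int p = 0; p < PAGES; p++)
        ref_byte[p] >>= 1;                 /* age every page             */
    for (int i = 0; i < count; i++)
        ref_byte[referenced[i]] |= 0x80;   /* set leftmost bit if used   */
}

int lru_victim(void) {
    int victim = 0;
    for (int p = 1; p < PAGES; p++)
        if (ref_byte[p] < ref_byte[victim]) victim = p;
    return victim;                          /* smallest byte = least recently used */
}

int main(void) {
    int t1[] = {0, 2}, t2[] = {2, 3}, t3[] = {1};
    tick(t1, 2); tick(t2, 2); tick(t3, 1);
    printf("page to replace: %d\n", lru_victim());
    return 0;
}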
Within each segment, instructions are ordered sequentially
The segments themselves are not necessarily stored contiguously
Advantages
Internal fragmentation is removed
Memory is allocated dynamically
Disadvantages
Difficulty managing variable-length segments in secondary storage
External fragmentation
Disadvantages
Table handling overhead
Memory needed for both page and segment tables
Associative memory
A special type of computer memory used in certain very high-speed searching applications; it is also known as content-addressable memory (CAM). Unlike standard computer memory (RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed so that the user supplies a data word and the CAM searches its entire memory to see if that data word is stored anywhere in it. If the data word is found, the CAM returns a list of one or more storage addresses where the word was found.
From: http://en.wikipedia.org/wiki/Content-addressable_memory
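The loop below is only a software analogue of the CAM interface described above (a real CAM compares against every cell in parallel, in hardware); the array contents are invented:

#include <stdio.h>

#define CELLS 8

/* Given a data word, return every address where it is stored. */
int search(const int memory[], int n, int word,
           int matches[], int max_matches) {
    int found = 0;
    for (int addr = 0; addr < n && found < max_matches; addr++)
        if (memory[addr] == word)
            matches[found++] = addr;
    return found;
}

int main(void) {
    int memory[CELLS] = {7, 3, 9, 3, 1, 3, 8, 2};
    int matches[CELLS];
    int n = search(memory, CELLS, 3, matches, CELLS);
    printf("word 3 found at %d address(es), first at address %d\n",
           n, matches[0]);
    return 0;
}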
Virtual Memory
Allows program execution even if the program is not stored entirely in memory
Requires cooperation between the memory manager and processor hardware
Advantages:
Job size is not restricted to the size of main memory
Memory is used more efficiently
Allows an unlimited amount of multiprogramming
Eliminates external fragmentation and minimizes internal fragmentation
Disadvantages
Increased processor hardware costs
Increased overhead for handling paging interrupts
Increased software complexity to prevent thrashing
Cache Memory
Small, high-speed intermediate memory unit
Increases the performance of the computer system
Memory access time is significantly reduced
Faster access for the processor than main memory
Stores frequently used data and instructions
Efficiency measures
Cache hit ratio: h = (number of requests found in cache / total number of requests) * 100
Miss ratio: 1 - h
Average memory access time = average cache access time + (1 - h) * average main memory access time
Because the CPU always checks cache memory first, main memory is accessed only on a miss (see the worked example below)
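A worked example with invented figures: 8,000 of 10,000 requests hit the cache (h = 80%), cache access takes 10 ns, and main memory access takes 100 ns:

#include <stdio.h>

int main(void) {
    double hits = 8000, total = 10000;
    double h = hits / total;                       /* hit ratio           */
    double cache_ns = 10.0, mem_ns = 100.0;
    double avg = cache_ns + (1.0 - h) * mem_ns;    /* cache checked first;
                                                      memory only on miss */
    printf("hit ratio = %.0f%%, average access time = %.1f ns\n",
           h * 100.0, avg);
    return 0;
}

The result, 10 + 0.2 * 100 = 30 ns, shows how a high hit ratio pulls the average access time close to the cache speed.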
Address Types
A physical address or absolute address refers to a physical location in main memory. A logical address is a reference to a memory location independent of the current assignment of data to memory. (Stallings) Compilers produce code in which all memory references are expressed as logical addresses. A relative address is a kind of logical address in which the address is expressed relative to some known point in the program. Usually, the first location is address zero and all other addresses are offsets from this address.
Address Translation
Programs are loaded into main memory with all memory references in relative form. Physical addresses are calculated on the fly as the instructions are executed. This process is called address translation, or address mapping. For adequate performance, the translation from relative to physical address must be done by hardware.
Address Translation
For this to work, the compiler must be able to assign logical addresses that can be used by the hardware to determine page number and offset. That is, the logical address should be a two-part value, (p,d), where p is the logical page number and d is an offset, or displacement, within the page.
Logical Addresses
This is easily done by requiring the size of a page (and the size of a frame) to be a power of two. Now the relative address (defined relative to the origin of the program) and the logical address (p, d) will be the same. To understand this, consider an analogy to decimal numbering.
CAUTION
The previous example was just an analogy; there are no computers with power-of-ten addressing systems.
Address Translation
The decimal example works because the logical address can be easily divided into page-number and displacement fields, since each digit represents a power of 10. Binary addresses can also be divided into page-number and displacement fields, where each digit represents a power of 2.
Example
Consider relative address 1502. It is located on logical page 1, at offset 478, so (p, d) = (1, 478). The binary representation of 1502 is 0000010111011110. Divide this into a six-bit page number field, 000001 = 1, and a 10-bit displacement field, 0111011110 = 478. When the memory-management hardware is presented with a binary address, it can easily extract the two fields (a shift-and-mask version is sketched below).
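The same split can be done with a shift and a mask, since a 1,024-byte page uses exactly 10 displacement bits; this sketch reproduces the 1502 example:

#include <stdio.h>

int main(void) {
    unsigned int relative = 1502;
    unsigned int page         = relative >> 10;      /* divide by 1024      */
    unsigned int displacement = relative & 0x3FF;    /* remainder (10 bits) */
    printf("relative %u -> (p, d) = (%u, %u)\n", relative, page, displacement);
    return 0;
}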
Simple Segmentation
Segmentation also divides the process address space into chunks, now called segments. Segments are based on logical, not physical, characteristics of the program. One segment might contain the main program, others a group of related functions, etc. Each segment can be a different size.
The hardware uses the segment number as an index into the segment table. Segment start address + displacement = physical address.
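A minimal sketch of that lookup with an invented segment table; a length check is added as the usual safeguard against a displacement that falls outside the segment:

#include <stdio.h>

/* Each entry stores a segment's base address and length (values invented). */
typedef struct { int base; int length; } SegmentEntry;

SegmentEntry segment_table[] = {
    {4000, 700},   /* segment 0: main program      */
    {9200, 350},   /* segment 1: related functions */
    {2100,  99},   /* segment 2: data              */
};

int translate(int segment, int displacement) {
    if (displacement >= segment_table[segment].length)
        return -1;                         /* addressing error: out of bounds */
    return segment_table[segment].base + displacement;
}

int main(void) {
    printf("(1, 132) -> physical %d\n", translate(1, 132));
    return 0;
}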
Summary
Paged memory allocation
Efficient use of memory
Allocates jobs in noncontiguous memory locations
Problems: increased overhead and internal fragmentation
Summary (continued)
Segmented/demand paged memory
Problems solved: compaction, external fragmentation, secondary storage handling
Associative memory
Used to speed up the process
Virtual memory
Programs can execute even if they are not stored entirely in memory
Job size is no longer restricted to main memory size
Cache memory
The CPU can execute instructions faster