
Memory Management: Early Systems

Learning Objectives
The basic functionality of the four memory allocation schemes presented in this chapter: single-user, fixed partitions, dynamic partitions, relocatable dynamic partitions
Best-fit memory allocation as well as first-fit memory allocation schemes
How a memory list keeps track of available memory

Learning Objectives (continued)


The importance of deallocation of memory in a dynamic partition system
The importance of the bounds register in memory allocation schemes
The role of compaction and how it improves memory allocation efficiency

Introduction
Management of main memory is critical
Entire system performance depends on two items:
How much memory is available
Optimization of memory during job processing

This chapter introduces:


Memory manager
Four types of memory allocation schemes:
Single-user systems
Fixed partitions
Dynamic partitions
Relocatable dynamic partitions

What is Memory Management?


MM is the task of storing and managing multiple processes in main memory. It is carried out by the OS, with support from the hardware. Memory is divided into kernel space and user space, to accommodate both the kernel and user processes. MM is concerned with managing user space.

Single-User Contiguous Scheme


Commercially available in the 1940s and 1950s
Entire program loaded into memory
Contiguous memory space allocated as needed
Jobs processed sequentially
Memory manager performs minimal work:
Register to store the base address
Accumulator to track program size

Single-User Contiguous Scheme (continued)


Disadvantages
No support for multiprogramming or networking
Not cost effective
Program size must be less than memory size to execute

Single-User Contiguous Scheme (continued)


Memory management is simple
OS gets a fixed part of memory
One process executes at a time
Process is always loaded at address 0
Maximum address is total memory size minus OS size
Limited in capability

[Figure: memory layout showing the OS, one process, and unused space]

Fixed Partitions
Commercially available in the 1950s and 1960s
Main memory is partitioned at system startup
One contiguous partition per job
Permits multiprogramming
Partition sizes remain static; the system must be shut down to reconfigure
Requires:
Protection of each job's memory space
Matching job size with partition size

Fixed Partitions
A program may be too big for any partition. Internal fragmentation is significant. Unequal-sized partitions can make the problem less severe, but cannot do away with it. Placement is trivial with equal-sized partitions, but can be done in one of two ways with unequal sizes.

Multiple Process Memory


Divide memory into a number of separate fixed areas
OS occupies one area
Each area can hold one process
Memory is wasted if a process is smaller than its partition: internal fragmentation
A process is prevented from running if there isn't a partition big enough

[Figure: memory divided into fixed partitions holding the OS and Processes A, B, and C, each partition with unused space]

Fixed Partitioning
Partition main memory into a set of nonoverlapping areas, or partitions
Partitions can be of equal or unequal sizes

Placement Algorithm - Fixed


Multiple queues

Assigns processes to the best-fit partition
Separate queue for each partition size
Tries to minimize internal fragmentation

Placement Algorithm - Fixed


Single queue

When it's time to load a process into main memory, the smallest available partition that will hold the process is selected
No danger that a process will wait in one queue while space is present in another

Fixed Partitions (continued)


Memory manager allocates memory space to jobs
Uses a table



Fixed Partitions (continued)


Disadvantages:
Requires contiguous loading of entire program
Job allocation method: first available partition with required size
To work well: all jobs must be of similar size, and job sizes must be known ahead of time
Arbitrary partition sizes lead to undesired results:
Partition too small: large jobs have longer turnaround time
Partition too large: memory waste, i.e., internal fragmentation (fragments within a block)

Dynamic Partitions
Main memory is not partitioned
Jobs are given the memory they request when they are loaded
One contiguous partition per job

Job allocation method


First-come, first-served allocation method
Memory waste: comparatively small

Disadvantages
Full memory utilization only while the initial jobs are loading
Subsequent allocations waste memory
External fragmentation: fragments between blocks


Placement Algorithms
Fragmentation creates the need for compaction
Possible algorithms:
Best-fit: choose the smallest hole that fits
First-fit: choose the first hole (that fits) from the beginning
Next-fit: choose the first hole (that fits) after the last choice
Worst-fit: opposite of best-fit
Placement isn't an issue with fixed partitions

Placement Algorithm: Observations


First-fit: easy, usually the fastest and best.
Next-fit: slightly worse; often leads to allocation from the largest block at the end of memory.
Best-fit: usually the worst. The fragments it creates are the smallest possible and the least likely to be useful; compaction generally needs to be done more often than with other schemes.

Replacement Algorithms
When all processes in main memory are blocked, the OS can swap in a suspended-ready process, if one exists. The OS must choose which process to replace (swap out or suspend) to make room for the incoming process. Considerations: state (blocked or ready), size (relative to the process being swapped in), priority, ... Later we will discuss replacement algorithms for virtual memory management schemes.

Best-Fit Versus First-Fit Allocation


Two methods for free space allocation
First-fit memory allocation: first partition fitting the requirements
Leads to fast allocation of memory space
Keeps free/busy lists organized by memory location (low-order to high-order)
Best-fit memory allocation: smallest partition fitting the requirements
Results in least wasted space
Internal fragmentation reduced, but not eliminated

Fixed and dynamic memory allocation schemes use both methods


Best-Fit Versus First-Fit Allocation (continued)


First-fit memory allocation
Advantage: faster in making the allocation
Disadvantage: leads to memory waste

Best-fit memory allocation


Advantage: makes the best use of memory space
Keeps free/busy lists ordered by size (smallest to largest)
Disadvantage: slower in making the allocation



Best-Fit Versus First-Fit Allocation (continued)


Algorithm for first-fit
Assumes the memory manager keeps two lists: one for free memory blocks and one for busy memory blocks
A loop compares the size of each job to the size of each free memory block until a block is found that is large enough to fit the job
The job is stored into that block of memory
The Memory Manager moves out of the loop and fetches the next job from the entry queue


Best-Fit Versus First-Fit Allocation (continued)


Algorithm for first-fit (continued):
If the entire list is searched in vain, the job is placed into the waiting queue
Otherwise, the Memory Manager fetches the next job
The process repeats

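To make the loop concrete, here is a minimal sketch in C of a first-fit search over a memory list. The Block type, field names, and sample sizes are illustrative assumptions, not structures from the text.

```c
#include <stdio.h>

/* Hypothetical memory-list entry: start address, size, free/busy flag. */
typedef struct {
    int addr;   /* starting memory address */
    int size;   /* block size              */
    int busy;   /* 0 = free, 1 = busy      */
} Block;

/* First-fit: scan the list in memory-location order and return the
 * index of the first free block large enough for the job, or -1 if
 * the entire list is searched in vain (the job must wait). */
int first_fit(Block list[], int n, int job_size) {
    for (int i = 0; i < n; i++) {
        if (!list[i].busy && list[i].size >= job_size)
            return i;   /* move out of the loop as soon as a block fits */
    }
    return -1;
}

int main(void) {
    /* Arbitrary free list, ordered low to high by address. */
    Block list[] = {{4075, 105, 0}, {5225, 5, 0}, {6785, 600, 0}, {7560, 20, 0}};
    int i = first_fit(list, 4, 200);
    if (i >= 0)
        printf("job of 200 placed at %d (block size %d)\n", list[i].addr, list[i].size);
    else
        printf("job placed in waiting queue\n");
    return 0;
}
```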

Best-Fit Versus First-Fit Allocation (continued)

[Figure: first-fit memory allocation of a request for a block of 200 spaces]



Best-Fit Versus First-Fit Allocation (continued)


Algorithm for best-fit
Goal
Find the smallest memory block into which the job will fit

Entire table searched before allocation

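For comparison, a sketch of the best-fit search, reusing the hypothetical Block type from the first-fit sketch above. As the slide notes, the entire table is searched before the allocation is made.

```c
/* Best-fit: search the entire list and remember the smallest free
 * block that still fits the job. Uses the hypothetical Block type
 * from the first-fit sketch above. */
int best_fit(Block list[], int n, int job_size) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!list[i].busy && list[i].size >= job_size &&
            (best == -1 || list[i].size < list[best].size))
            best = i;               /* tightest fit seen so far */
    }
    return best;                    /* -1: no block fits, job waits */
}
```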

Best-Fit Versus First-Fit Allocation (continued)

[Figure: best-fit memory allocation of a request for a block of 200 spaces]



Best-Fit Versus First-Fit Allocation (continued)


Hypothetical allocation schemes:
Next-fit: starts searching from the last allocated block for the next available block when a new job arrives
Worst-fit: allocates the largest available free block to the new job; the opposite of best-fit
A good way to explore the theory of memory allocation, but not the best choice for an actual system

Deallocation
Deallocation: freeing allocated memory space
For a fixed-partition system:
A straightforward process: the Memory Manager resets the status of the job's memory block to "free" upon job completion
Any code may be used, for example binary values with zero indicating free and one indicating busy

Deallocation (continued)
For dynamic-partition system:
More complex: the algorithm tries to combine free areas of memory

Three dynamic partition system cases


Case 1: the block to be deallocated is adjacent to another free block
Case 2: the block to be deallocated is between two free blocks
Case 3: the block to be deallocated is isolated from other free blocks
(A combined sketch of the three cases follows.)

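All three cases can be handled by one merge routine over a memory list kept sorted by address. A hedged C sketch; the Entry type and function name are illustrative, not from the text.

```c
#include <stdbool.h>

/* Hypothetical memory-list entry, kept sorted by starting address. */
typedef struct {
    int addr, size;
    bool busy;
} Entry;

/* Free entry i, merging it with any adjacent free neighbors.
 * Case 3 (isolated): just mark the block free.
 * Case 1 (one free neighbor): one of the two merges below fires.
 * Case 2 (between two free blocks): both merges fire.
 * Returns the new length of the list. */
int deallocate(Entry list[], int n, int i) {
    list[i].busy = false;
    if (i > 0 && !list[i - 1].busy) {          /* merge with predecessor */
        list[i - 1].size += list[i].size;      /* keep the smaller address */
        for (int k = i; k < n - 1; k++) list[k] = list[k + 1];
        n--; i--;
    }
    if (i < n - 1 && !list[i + 1].busy) {      /* merge with successor */
        list[i].size += list[i + 1].size;
        for (int k = i + 1; k < n - 1; k++) list[k] = list[k + 1];
        n--;
    }
    return n;
}
```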

Case 1: Joining Two Free Blocks


The blocks are adjacent
The list changes to reflect the starting address of the new free block
Example: 7600, the address of the first instruction of the job that just released this block
The memory block size changes to show the new size of the free space: the combined total of the two free partitions
Example: (200 + 5)


Case 2: Joining Three Free Blocks


The deallocated memory space lies between two free memory blocks
The list changes to reflect the starting address of the new free block
Example: 7560, the smallest beginning address of the three
The sizes of the three free partitions are combined
Example: (20 + 20 + 205)
The combined entry (the last of the three) is given a status of null
Example: 7600


Case 3: Deallocating an Isolated Block


The deallocated memory space is isolated from other free areas: not adjacent to any free block, but between two busy areas
The system determines the released memory block's status and searches the table for a null entry
A null entry occurs when a memory block between two busy memory blocks is returned to the free list

[Tables: the memory lists before and after deallocating the isolated block; compare to Table 2.8]

Relocatable Dynamic Partitions


Memory Manager relocates programs to gather together all empty blocks
Compacts the empty blocks to make one block of memory large enough to accommodate some or all of the jobs waiting to get in

Relocatable Dynamic Partitions (continued)


Compaction: reclaiming fragmented sections of memory space
Every program in memory must be relocated so that programs become contiguous
The operating system must distinguish between addresses and data values:
Every address is adjusted to account for the program's new location in memory
Data values are left alone
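A minimal sketch of compaction, assuming the hypothetical Entry type from the deallocation sketch: busy blocks slide down toward address 0, and the distance each job moved is recorded so its relocation register can be set. The function and array names are illustrative.

```c
#include <stdbool.h>

typedef struct { int addr, size; bool busy; } Entry;   /* as above */

/* Slide every busy block down toward address 0, leaving one free
 * block at the high end of memory. moved[i] records how far job i
 * moved; its relocation register would be set to -moved[i]. */
void compact(Entry list[], int n, int moved[]) {
    int next = 0;                        /* next free physical address */
    for (int i = 0; i < n; i++) {
        if (!list[i].busy) { moved[i] = 0; continue; }
        moved[i] = list[i].addr - next;  /* distance the job slid down */
        list[i].addr = next;             /* relocate the block         */
        next += list[i].size;
    }
}
```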


Relocatable Dynamic Partitions (continued)


Compaction issues:
What goes on behind the scenes when relocation and compaction take place?
What keeps track of how far each job has moved from its original storage area?
What lists have to be updated?

Relocatable Dynamic Partitions (continued)


What lists have to be updated?
Free list
Must show the partition for the new block of free memory

Busy list
Must show the new locations for all of the jobs already in process that were relocated

Each job will have a new address


Exception: those already at the lowest memory locations

Relocatable Dynamic Partitions (continued)


Special-purpose registers used for relocation:
Bounds register
Stores highest location accessible by each program

Relocation register
Contains the value that must be added to each address referenced in the program, so that the program accesses the correct memory addresses after relocation
If the program is not relocated, a zero value is stored in the program's relocation register
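A sketch of how these registers might be applied on every memory reference. The register names come from the text; the C types, the negative relocation value for a job moved downward, and the error handling are illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

/* Per-job relocation state. */
typedef struct {
    int relocation;   /* value added to every address; 0 if not relocated */
    int bounds;       /* highest location accessible by this program      */
} Registers;

/* Translate one address referenced by the program, trapping any
 * reference outside the job's space. */
int translate(Registers r, int addr) {
    int physical = addr + r.relocation;   /* adjust for the new location */
    if (physical < 0 || physical > r.bounds) {
        fprintf(stderr, "addressing error: %d out of bounds\n", physical);
        exit(EXIT_FAILURE);
    }
    return physical;
}

int main(void) {
    /* A job compacted 200 locations downward, now bounded at 7999. */
    Registers r = { -200, 7999 };
    printf("old address 7600 now resolves to %d\n", translate(r, 7600));
    return 0;
}
```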

Relocatable Dynamic Partitions (continued)


Compacting and relocating optimizes use of memory
Improves throughput

Options for timing of compaction:


When a certain percentage of memory is busy
When there are jobs waiting to get in
After a prescribed amount of time has elapsed

Compaction entails more overhead
Goal: optimize processing time and memory use while keeping overhead as low as possible

Summary
Four memory management techniques
Single-user systems, fixed partitions, dynamic partitions, and relocatable dynamic partitions

Common requirements of four memory management techniques


Entire program loaded into memory
Contiguous storage
Memory residency until job completed

Each scheme places severe restrictions on job size
Sufficient for the first three generations of computers

Summary (continued)
New memory management trends emerged in the late 1960s and early 1970s (discussed in the next chapter)
Common characteristics of these newer schemes:
Programs are not stored in contiguous memory locations
Not all segments reside in memory during execution of the job

Memory Management: Virtual Memory

Learning Objectives
The basic functionality of the memory allocation methods covered in this chapter: paged, demand paging, segmented, and segmented/demand paged memory allocation
The influence that these page allocation methods have had on virtual memory
The difference between a first-in first-out page replacement policy, a least-recently-used page replacement policy, and a clock page replacement policy

Learning Objectives (continued)


The mechanics of paging and how a memory allocation scheme determines which pages should be swapped out of memory
The concept of the working set and how it is used in memory allocation schemes
The impact that virtual memory had on multiprogramming
Cache memory and its role in improving system response time

Introduction
Evolution of virtual memory
Paged, demand paging, segmented, segmented/demand paging
Foundation for current virtual memory methods

Improvements over early schemes, which suffered from:
The need for contiguous program storage
The need to place the entire program in memory during execution
Fragmentation
Overhead due to relocation

Introduction (continued)
Page replacement policies
First-In First-Out (FIFO)
Least Recently Used (LRU)
Most Frequently Used (MFU)
Clock replacement and bit-shifting
Mechanics of paging
The working set

Virtual memory
Concepts and advantages

Cache memory
Concepts and advantages

Paged Memory Allocation


Divides each incoming job into pages of equal size
Works best when page size = memory block (page frame) size = size of disk section (sector)
Sizes depend on the operating system and the disk sector size

Memory manager tasks prior to program execution


Determines the number of pages in the program
Locates enough empty page frames in main memory
Loads all program pages into page frames

Advantage of storing program non-contiguously


New problem: keeping track of the job's pages


[Figure: disk structure]

Paged Memory Allocation (continued)


Three tables: the Job Table, the Page Map Table, and the Memory Map Table


Paged Memory Allocation (continued)

Three tables for tracking pages


Job Table (JT): contains the size of each active job and the memory location where its PMT is stored; one JT for the whole system
Page Map Table (PMT): contains the page number and its corresponding page frame address; one PMT for each job
Memory Map Table (MMT): contains the location of each page frame and its free/busy status; one MMT for the whole system



Paged Memory Allocation (continued)


Displacement (offset) of a line (statement)
Determines the line's distance from the beginning of its page
Used to locate the line within its page frame
A relative value

Determining page number and displacement of a line


Divide the job space address by the page size
Page number: the integer quotient from the division
Displacement: the remainder from the division



Paged Memory Allocation (continued)

Steps to determine the exact location of a line in memory


Determine the page number and displacement of the line
Refer to the job's PMT to determine the page frame containing the required page
Obtain the address of the beginning of the page frame: multiply the page frame number by the page frame size
Add the displacement (calculated in the first step) to the starting address of the page frame

Address resolution
Translating a job space address into a physical address
A relative address into an absolute address

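The steps above as a small C sketch. The page size of 100 and the PMT contents are illustrative numbers chosen so the arithmetic is easy to follow, not the book's figures.

```c
#include <stdio.h>

#define PAGE_SIZE 100   /* illustrative page / page frame size */

/* Hypothetical PMT for one job: index = page number,
 * value = page frame holding that page. */
static const int pmt[] = { 8, 10, 5, 11 };

/* Resolve a job space (relative) address to a physical (absolute) one. */
int resolve(int job_address) {
    int page         = job_address / PAGE_SIZE;  /* integer quotient */
    int displacement = job_address % PAGE_SIZE;  /* remainder        */
    int frame        = pmt[page];                /* consult the PMT  */
    return frame * PAGE_SIZE + displacement;     /* frame start + offset */
}

int main(void) {
    /* Job address 214: page 2, displacement 14, stored in frame 5. */
    printf("job address 214 -> physical address %d\n", resolve(214));
    return 0;
}
```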


Paged Memory Allocation (continued)


Advantages
Allows job allocation in noncontiguous memory
Efficient memory use

Disadvantages
Increased overhead from address resolution
Internal fragmentation in the last page
Must still store the entire job in memory

Page size selection is crucial


Too small: generates very long PMTs
Too large: excessive internal fragmentation


Demand Paging
Pages brought into memory only as needed
Removes the restriction that the entire program be in memory
Requires high-speed page access

Exploits programming techniques


Modules written sequentially; not all pages necessarily needed simultaneously
Examples:
User-written error-handling modules
Mutually exclusive modules
Certain program options: mutually exclusive or not accessible
Tables given a fixed amount of space, of which only a fraction is used
For example: symbol table in an assembly program.


Demand Paging (continued)


Allowed for the wide availability of the virtual memory concept
Provides the appearance of almost infinite (nonfinite) physical memory
Jobs run with less main memory than required by the paged memory allocation scheme
Requires a high-speed direct-access storage device that works directly with the CPU

Swapping: how and when pages are passed in memory


Depends on predefined policies

Demand Paging (continued)


Memory Manager requires three tables
Job Table
Page Map Table, with three additional fields indicating:
Whether the requested page is already in memory
Whether the page contents have been modified
Whether the page has been referenced recently
These fields determine which page remains in main memory and which is swapped out
Memory Map Table


Demand Paging (continued)


Swapping Process
Exchanges a resident memory page with a secondary storage page
Involves:
Copying the resident page to disk (if it was modified)
Writing the new page into the empty page frame

Requires close interaction between:


Hardware components
Software algorithms
Policy schemes

Demand Paging (continued)


Hardware instruction processing (p. 74)
Page fault: failure to find a page in memory
Page fault handler (p. 75): part of the operating system
Determines whether there are empty page frames in memory:
Yes: the requested page is copied from secondary storage
No: swapping occurs

Deciding which page frame to swap out if all are busy
Directly dependent on the predefined policy for page removal

Demand Paging (continued)


Thrashing: an excessive amount of page swapping between main memory and secondary storage
Caused by removing a main memory page that is called back shortly thereafter
Produces inefficient operation
Occurs across jobs: a large number of jobs competing for a relatively small number of free pages
Occurs within a job: e.g., in loops crossing page boundaries

Demand Paging (continued)


Advantages
Job no longer constrained by the size of physical memory (the concept of virtual memory)
Utilizes memory more efficiently than previous schemes
Faster response

Disadvantages
Increased overhead caused by tables and page interrupts

Page Replacement Policies and Concepts


The policy that selects the page to be removed is crucial to system efficiency
Page replacement policies:
First-In First-Out (FIFO): the best page to remove is the one that has been in memory longest
Least Recently Used (LRU): the best page to remove is the one least recently accessed
Mechanics of paging concepts
The working set concept

First-In First-Out
Removes the page that has been in memory longest
Efficiency: the ratio of page faults to page requests
In the FIFO example this is 9/11, or 82% (not so good)
FIFO anomaly: more memory frames does not necessarily lead to better performance

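A small simulation sketch of FIFO replacement; the request string and frame counts are arbitrary choices, not the data behind the 9/11 example.

```c
#include <stdio.h>

/* Count page faults for FIFO replacement with nframes page frames. */
int fifo_faults(const int *requests, int n, int nframes) {
    int frames[16], oldest = 0, faults = 0;
    for (int f = 0; f < nframes; f++) frames[f] = -1;   /* all empty */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == requests[i]) hit = 1;
        if (!hit) {                          /* page fault */
            frames[oldest] = requests[i];    /* evict longest-resident page */
            oldest = (oldest + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int reqs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4};
    printf("FIFO faults with 3 frames: %d/11\n", fifo_faults(reqs, 11, 3));
    printf("FIFO faults with 4 frames: %d/11\n", fifo_faults(reqs, 11, 4));
    return 0;
}
```

With this particular request string, three frames produce 9 faults out of 11 requests while four frames produce 10: more frames, worse performance, i.e., the FIFO anomaly in action.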


Least Recently Used


Removes the least recently accessed page
Efficiency: causes either a decrease in, or the same number of, page interrupts (faults)
Slightly better than FIFO in the example: 8/11, or 73%
LRU is a stack algorithm removal policy: increasing main memory will cause either a decrease in, or the same number of, page interrupts
Does not experience the FIFO anomaly



LRU Implementation
Bit-shifting technique (Fig. 3.11, p81)
Uses an 8-bit reference byte per page and a bit-shifting technique
At every CPU clock tick, each page's reference byte is shifted one bit to the right; if the page was referenced, its leftmost bit is set to 1
When a page fault occurs, the LRU policy selects the page with the smallest value in its reference byte, because that is the least recently used
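A sketch of the reference-byte technique; the page count and the tick sequence are illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define NPAGES 4

static uint8_t ref_byte[NPAGES];   /* 8-bit reference byte per page     */
static int referenced[NPAGES];     /* 1 if the page was just referenced */

/* One clock tick: shift each byte right, feeding the page's reference
 * bit into the leftmost position, then clear the bit for the next tick. */
void clock_tick(void) {
    for (int p = 0; p < NPAGES; p++) {
        ref_byte[p] = (uint8_t)((ref_byte[p] >> 1) | (referenced[p] << 7));
        referenced[p] = 0;
    }
}

/* On a page fault, the victim is the page with the smallest byte value. */
int lru_victim(void) {
    int victim = 0;
    for (int p = 1; p < NPAGES; p++)
        if (ref_byte[p] < ref_byte[victim]) victim = p;
    return victim;
}

int main(void) {
    referenced[2] = 1; clock_tick();   /* page 2 used in interval 1 */
    referenced[0] = 1; clock_tick();   /* page 0 used in interval 2 */
    /* Pages 1 and 3 were never used; page 1 is reported first. */
    printf("victim: page %d\n", lru_victim());
    return 0;
}
```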

Second-Chance Algorithm (p80)


Another page replacement algorithm (Fig. 3.10)
Each page has a reference bit, initially set to 0; the bit is set to 1 when the page is referenced
Implemented as a circular queue with a pointer
When a page must be replaced, the pointer scans the queue: if a page's reference bit is 1, the bit is reset to 0 and the page is not replaced immediately, giving it a second chance to stay in main memory
If all pages have their bits set to 1, the pointer cycles through the whole queue, resetting all reference bits to 0; the next page it finds with a reference bit of 0 is then replaced
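A sketch of the circular-queue scan, with illustrative resident pages and reference bits.

```c
#include <stdio.h>

#define NFRAMES 4

static int page_in[NFRAMES] = {10, 11, 12, 13}; /* resident page per frame */
static int ref_bit[NFRAMES] = {1, 0, 1, 1};     /* 1 = referenced recently */
static int hand = 0;                            /* circular-queue pointer  */

/* Choose a frame to replace: pages with bit 1 get a second chance
 * (their bit is cleared); the first page found with bit 0 is evicted. */
int second_chance(void) {
    for (;;) {
        if (ref_bit[hand] == 0) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;            /* second chance: clear and move on */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    int f = second_chance();
    printf("replace page %d in frame %d\n", page_in[f], f);
    return 0;
}
```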

The Mechanics of Paging


To swap pages, the Memory Manager requires specific information
It uses the Page Map Table, whose entries carry status bits with values of 0 or 1

The Mechanics of Paging (continued)


Page Map Table bit meanings:
Status bit: indicates whether the page is currently in memory
Referenced bit: indicates whether the page has been referenced recently; used by LRU to determine which page to swap
Modified bit: indicates whether the page contents have been altered; used to determine whether the page must be rewritten to secondary storage when it is swapped out
There are four combinations of the modified and referenced bits (00, 01, 10, 11); see Table 3.5.


The Working Set


The set of pages residing in memory that can be accessed directly without incurring a page fault
Improves performance of the demand paging scheme

Requires concept of locality of reference


Occurs in well-structured programs
Only a small fraction of pages is needed during program execution

Time-sharing systems considerations: the system decides
The number of pages comprising the working set
The maximum number of pages allowed for a working set


Segmented Memory Allocation


Each job divided into several segments
Segments are of different sizes
One for each module containing related functions

Reduces page faults


A segment's loops are not split over two or more pages

Main memory no longer divided into page frames


Now allocated dynamically

The program's structural modules determine the segments:
Each segment is numbered when the program is compiled/assembled
A Segment Map Table (SMT) is generated



Segmented Memory Allocation (continued)


Memory Manager tracks segments using tables
Job Table: lists every job in process (one for the whole system)
Segment Map Table: lists details about each segment (one for each job)
Memory Map Table: monitors the allocation of main memory (one for the whole system)

Instructions within each segment are ordered sequentially, but the segments themselves need not be stored contiguously

Segmented Memory Allocation (continued)


Addressing scheme requirement
Segment number and displacement

Advantages
Internal fragmentation is removed
Memory allocated dynamically

Disadvantages
Difficulty managing variable-length segments in secondary storage
External fragmentation

Segmented/Demand Paged Memory Allocation


Subdivides segments into pages of equal size, smaller than most segments and more easily manipulated than whole segments
Offers the logical benefits of segmentation and the physical benefits of paging
Removes segmentation's problems: compaction, external fragmentation, and secondary storage handling
Addressing scheme requires: segment number, page number within that segment, and displacement within that page

Segmented/Demand Paged Memory Allocation (continued)


Scheme requires four tables
Job Table: lists every job in process (one for the whole system)
Segment Map Table: lists details about each segment (one for each job)
Page Map Table: lists details about every page (one for each segment)
Memory Map Table: monitors the allocation of page frames in main memory (one for the whole system)



Segmented/Demand Paged Memory Allocation (continued)


Advantages
Large virtual memory
Segments loaded on demand
Logical benefits of segmentation
Physical benefits of paging

Disadvantages
Table handling overhead
Memory needed for page and segment tables

Associative Memory
A special type of computer memory used in certain very-high-speed searching applications; also known as content-addressable memory (CAM). Unlike standard computer memory (RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed such that the user supplies a data word and the CAM searches its entire memory to see if that data word is stored anywhere in it. If the data word is found, the CAM returns a list of one or more storage addresses where the word was found.
From: http://en.wikipedia.org/wiki/Content-addressable_memory


Associative Memory (continued)


In memory management applications, CAM stores segment map tables (SMT) and page map tables (PMT), so the contents of the SMT and PMT can be searched simultaneously for each active job
Advantage: can speed up table searching
Disadvantage: the high cost of the complex hardware required to perform parallel searches

Virtual Memory
Allows program execution even if the program is not stored entirely in memory
Requires cooperation between the memory manager and the processor hardware
Advantages:
Job size not restricted to the size of main memory
Memory used more efficiently
Allows an unlimited amount of multiprogramming
Eliminates external fragmentation and minimizes internal fragmentation

Virtual Memory (continued)


Advantages (continued)
Allows the sharing of code and data
Facilitates dynamic linking of program segments

Disadvantages
Increased processor hardware costs
Increased overhead for handling paging interrupts
Increased software complexity to prevent thrashing


Cache Memory
Small, high-speed intermediate memory unit
Increases computer system performance:
Memory access time significantly reduced
Faster processor access compared to main memory
Stores frequently used data and instructions

Two levels of cache


L1: a pair of cache memories built into the CPU, one to store instructions and the other to store data
L2: connected to the CPU; contains a copy of bus data

Data/instructions move from main memory to cache


Uses methods similar to paging algorithms (See page 96 for details)



Cache Memory (continued)


Four cache memory design factors
Cache size, block size, block replacement algorithm, and rewrite policy

An optimal selection of cache size and replacement algorithm is necessary


May lead to 80-90% of all requests being found in the cache

Efficiency measures:
Cache hit ratio: h = (number of requests found in cache / total number of requests) x 100
Miss ratio: 1 - h
Average memory access time = AvgCacheAccessTime + AvgMemAccessTime x (1 - h)
(The CPU always checks cache memory first; only on a miss does it also access main memory.)
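A worked sketch of the formula; the 10 ns cache, 100 ns main memory, and 85% hit ratio are assumed values, not figures from the text.

```c
#include <stdio.h>

/* The cache is always checked first, so its access time is always
 * paid; main memory is accessed only on the (1 - h) fraction of
 * requests that miss in the cache. */
double avg_access_time(double cache_ns, double mem_ns, double hit_ratio) {
    return cache_ns + mem_ns * (1.0 - hit_ratio);
}

int main(void) {
    /* 10 + 100 * 0.15 = 25 ns average access time. */
    printf("average access time: %.1f ns\n", avg_access_time(10.0, 100.0, 0.85));
    return 0;
}
```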

Address Types
A physical address, or absolute address, refers to a physical location in main memory.
A logical address is a reference to a memory location independent of the current assignment of data to memory (Stallings). Compilers produce code in which all memory references are expressed as logical addresses.
A relative address is a kind of logical address in which the address is expressed relative to some known point in the program. Usually, the first location is address zero and all other addresses are offsets from this address.

Address Translation
Programs are loaded into main memory with all memory references in relative form. Physical addresses are calculated on the fly as the instructions are executed. This process is called address translation, or address mapping. For adequate performance, the translation from relative to physical addresses must be done by hardware.

Hardware Address Translation in Partitioned Memory Systems


When a process is dispatched, a base register (in the CPU) is loaded with the physical start address of the process, and a bounds register is loaded with the process's ending physical address. When a relative address is encountered, it is added to the contents of the base register to get the actual physical address. The physical address is then compared to the bounds register to make sure the process isn't referencing a location outside its own address space. The hardware mechanism provides both translation and protection.

Address Translation in Paging


In a paged system, mapping is done on a page-by-page basis; in partitioned systems it's done on a process-by-process basis. The hardware must be able to extract two pieces of data from the logical address:
a logical page number
an offset within the page (the offset is like a relative address within the page)

Steps in Address Translation


Extract the logical page number from the logical address.
Use the page number to index into the page table and retrieve the frame number where the page is stored.
Concatenate the frame number and the offset to get the physical address.

Address Translation
For this to work, the compiler must be able to assign logical addresses that can be used by the hardware to determine page number and offset. That is, the logical address should be a two-part value, (p,d), where p is the logical page number and d is an offset, or displacement, within the page.

Logical Addresses
This is easily done by requiring the size of a page (and the size of a frame) to be a power of two. Now, the relative address (defined relative to the origin of the program) and the logical address (p,d) will be the same. To try to understand this, consider an analogy to decimal numbering.

Address Translation Base 10 Example


Suppose a computer uses decimal (power-of-10) addressing instead of binary. Let the page and frame size be 100, with addresses in the range 0000-9999.
Frame 0 has start address 0000; frame 1 has start address 0100, and so on.
Logical address 7324 can easily be divided into two parts: page number = 73, offset = 24.
If logical page 73 is loaded into frame 1, logical address 7324 maps to physical address 0124.

CAUTION
The previous example was just an analogy; there are no computers with power-of-ten addressing systems.

Address Translation
The decimal example works because the logical address can be easily divided into page-number and displacement fields, since each digit represents a power of 10. Binary addresses can also be divided into page-number and displacement fields, where each digit represents a power of 2.

Logical Address Example


Assume an addressing scheme that uses 16-bit addresses and has a page size of 1K (1024 = 2^10).
We'll need 10 bits for the displacement. That leaves 6 bits for the page number, so our hypothetical computer has 2^6 = 64 pages.
A process 2700 bytes long will occupy three pages, with fragmentation of 372 bytes in the last page.

Example
Consider relative address 1502. It is located on logical page 1, at offset 478: (p, d) = (1, 478). The binary representation of 1502 is 0000010111011110. Divide this into a six-bit page number field, 000001 = 1, and a 10-bit displacement field, 0111011110 = 478. When the MM hardware is presented with a binary address, it can easily extract the two fields.
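The same extraction in C, using a shift and a mask; the frame number 5 is an assumption for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 10                         /* 1K pages: 2^10 bytes */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)  /* low 10 bits, 0x3FF   */

int main(void) {
    uint16_t logical = 1502;                    /* 0000 0101 1101 1110 */
    uint16_t page    = logical >> OFFSET_BITS;  /* top 6 bits  -> 1    */
    uint16_t offset  = logical & OFFSET_MASK;   /* low 10 bits -> 478  */

    /* Suppose the page table maps logical page 1 to frame 5. The
     * physical address is the frame number concatenated with the
     * offset: no multiply-and-add needed. */
    uint16_t frame    = 5;
    uint16_t physical = (uint16_t)((frame << OFFSET_BITS) | offset);

    printf("page %u, offset %u -> physical %u\n", page, offset, physical);
    return 0;
}
```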

Generalized Address Translation


Page size is determined by the hardware. It is typically no smaller than 512 bytes and may be as large as 8K. In general, an n + m bit address represents an n-bit page number field and an m-bit offset (displacement) field. The MMU (Memory Management Unit) of the hardware can be designed to implement address translation automatically.

Generalized Address Translation


Extract the leftmost n bits of the relative address to get the page number, p.
Use the page number as an index into the page table and get the frame number, k; the frame has start address k x 2^m (its rightmost m bits are zero).
The physical address is then k x 2^m + d.
The physical address can be obtained directly by concatenating the frame number and the offset: there is no need to multiply and add.

Paged Address Translation


[Figure: paged address translation; notice that only the page# field changes]

Page Table Register (PTR)


How does the MMU know where to find the page table for a given process? Most architectures have a special register for this purpose. When a process is dispatched (put into the Run state), the start address of its page table is loaded into the PTR as part of the process switch.

Simple Segmentation
Segmentation also divides the process address space into chunks, now called segments. Segments are based on logical, not physical, characteristics of the program. One segment might contain the main program, others a group of related functions, etc. Each segment can be a different size.

Address Translation in Segmentation


Similar to simple paging - for example, there is a segment table, which specifies
the start address of each segment
the segment length

The hardware uses the segment number as an index into the segment table. Segment start address + displacement = physical address.

Address Translation Remarks (mostly about paging)


Address translation is transparent to the user. The OS is responsible for finding enough free frames for a process and for creating and initializing the page table. Protection is provided by the page table: if the referenced page isn't part of the process address space, the PT entry will be null.

Memory Allocation in Paging


Paging dramatically simplifies allocation. The frame table is a structure that many operating systems use to support allocation. The frame table has an entry for each frame in memory. The entry tells whether the corresponding frame is empty or allocated, and if allocated, to which process.
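A minimal sketch of such a frame table; the size, the FREE sentinel, and the function name are illustrative.

```c
#include <stdio.h>

#define NFRAMES 8
#define FREE    (-1)

/* One entry per frame: the id of the owning process, or FREE. */
static int frame_table[NFRAMES] = {
    FREE, FREE, FREE, FREE, FREE, FREE, FREE, FREE
};

/* Allocate any free frame to process pid; -1 means memory is full. */
int alloc_frame(int pid) {
    for (int f = 0; f < NFRAMES; f++) {
        if (frame_table[f] == FREE) {
            frame_table[f] = pid;
            return f;
        }
    }
    return -1;
}

int main(void) {
    printf("process 7 gets frame %d\n", alloc_frame(7));
    printf("process 9 gets frame %d\n", alloc_frame(9));
    return 0;
}
```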

Summary
Paged memory allocation
Efficient use of memory: jobs allocated in noncontiguous memory locations
Problems: increased overhead and internal fragmentation

Demand paging scheme


Eliminates the physical memory size constraint
LRU provides slightly better efficiency than FIFO

Segmented memory allocation scheme


Solves internal fragmentation problem


Summary (continued)
Segmented/demand paged memory
Problems solved: compaction, external fragmentation, secondary storage handling

Associative memory
Used to speed up the process

Virtual memory
Programs execute even if not stored entirely in memory
Job size no longer restricted to main memory size

Cache memory
CPU can execute instructions faster


