Memory Management
Chapter 8
Operating System
Memory consists of a large array of bytes, each with its own address. The CPU
fetches instructions from memory according to the value of the program
counter. These instructions may cause additional loading from and storing to
specific memory addresses.
Memory Management Topics:
• Basic Hardware
• Logical and physical addresses
• Dynamic Linking and shared libraries
Basic Hardware
• Main memory and the registers built into the processor itself are the
only general-purpose storage that the CPU can access directly.
• If the data are not in memory, they must be moved there before the
CPU can operate on them.
• A cache adds fast memory between the CPU and main memory, typically
on the CPU chip, to speed up access.
Dynamic Linking and Shared Libraries
• Dynamically linked libraries are system libraries that are linked to user programs when
the programs are run.
• It is similar to dynamic loading. Here, though, linking, rather than loading, is
postponed until execution time.
• With dynamic linking, a stub is included in the image for each library routine
reference. The stub is a small piece of code that indicates how to locate the
appropriate memory-resident library routine or how to load the library if the routine
is not already present.
Dynamic Linking and Shared Libraries
• When the stub is executed, it checks to see whether the needed routine is already in memory. If it is
not, the program loads the routine into memory.
• All processes that use a language library execute only one copy of the library code.
• This feature can be extended to library updates (such as bug fixes). A library
may be replaced by a new version, and all programs that reference the
library will automatically use the new version. Without dynamic linking,
programs would need to be relinked to gain access to the new library.
• Thus, only programs that are compiled with the new library version are
affected by any incompatible changes incorporated in it. Other programs
linked before the new library was installed will continue using the older
library. This system is also known as shared libraries.
• Consider a scenario where multiple programs use a common math library.
With static linking, each program would contain its own copy of the library
code; with dynamic linking, they all share a single memory-resident copy.
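The shared single copy can be illustrated with Python's module cache, which behaves loosely like a shared library: however many clients import a module, only one copy lives in memory and every importer gets a reference to it. This is an analogy for illustration only, with `math` standing in for the shared math library.

```python
import sys
import math          # first "program" loads the library into memory
import math as math2 # second "program" asks for the same library

# Python keeps one memory-resident copy and hands out references to it,
# much as dynamically linked programs share one copy of library code.
print(math is math2)                  # True: both names, one copy
print(sys.modules["math"] is math)    # True: the cache holds that copy
```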
Disadvantages of Fixed-Size Partitioning (cont.)
• Internal fragmentation
• Limit on the number of processes (the degree of
multiprogramming is low)
• Limit on size of processes.
• External fragmentation
Variable-Size Partitioning
Advantages:
• No internal fragmentation
• No limitation on the size of a process
• Degree of multiprogramming is dynamic
Disadvantages of Variable-Size Partitioning:
• External fragmentation
• Difficult implementation
Memory Allocation
• This procedure is a particular instance of the general dynamic storage
allocation problem, which concerns how to satisfy a request of size n
from a list of free holes.
• There are many solutions to this problem:
• First fit: allocate the first hole that is big enough.
• Best fit: allocate the smallest hole that is big enough.
• Worst fit: allocate the largest hole.
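The three strategies can be sketched as follows. Each function scans a list of free-hole sizes for a request of size n and returns the index of the chosen hole (or None if none fits); the hole sizes and the 212 KB request are illustrative.

```python
def first_fit(holes, n):
    """Allocate the first hole that is big enough."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Allocate the smallest hole that is big enough."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Allocate the largest hole."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free-hole sizes in KB (illustrative)
print(first_fit(holes, 212))   # 1 (the 500 KB hole comes first)
print(best_fit(holes, 212))    # 3 (300 KB is the tightest fit)
print(worst_fit(holes, 212))   # 4 (600 KB is the largest hole)
```

Best fit leaves the smallest leftover hole, while worst fit leaves the largest leftover, which may still be usable by a later request.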
Fragmentation
• External fragmentation exists when there is enough total memory
space to satisfy a request but the available spaces are not contiguous.
• One approach is to break physical memory into fixed-sized blocks and
allocate memory in units based on block size. With this approach, the
memory allocated to a process may be slightly larger than the requested
memory. The difference between these two numbers is internal
fragmentation.
• One solution to the problem of external fragmentation is
compaction; another is to permit noncontiguous allocation
(paging and segmentation).
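Internal fragmentation under fixed-size blocks is simple arithmetic: round the request up to a whole number of blocks and take the difference. The request and block sizes below are illustrative.

```python
def internal_fragmentation(request, block_size):
    """Bytes wasted when `request` bytes are served in whole blocks."""
    blocks = -(-request // block_size)        # ceiling division
    return blocks * block_size - request

# A 13-byte request served from 4-byte blocks occupies 4 blocks
# (16 bytes), wasting 3 bytes inside the last block.
print(internal_fragmentation(13, 4))   # 3
```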
Paging and Segmentation
Paging
• Paging permits the physical address space of a process to be
noncontiguous.
• Paging avoids external fragmentation.
• The basic method for implementing paging involves breaking
physical memory into fixed-sized blocks called frames and breaking
logical memory into blocks of the same size called pages.
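Translation under paging splits a logical address into a page number and an offset, then substitutes the frame number found in the page table. A minimal sketch, with an illustrative 4 KB page size and page-table contents:

```python
PAGE_SIZE = 4096                     # 4 KB pages (illustrative)
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]         # a real MMU does this in hardware
    return frame * PAGE_SIZE + offset

# Logical address 4100 = page 1, offset 4; page 1 maps to frame 2,
# so the physical address is 2 * 4096 + 4 = 8196.
print(translate(4100))   # 8196
```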
Paging
• An 80-percent hit ratio, for example, means that we find the desired page number in the TLB 80 percent of
the time. If it takes 100 nanoseconds to access memory, then a mapped-memory
access takes 100 nanoseconds when the page number is in the TLB. If we fail to find
the page number in the TLB then we must first access memory for the page table and
frame number (100 nanoseconds) and then access the desired byte in memory (100
nanoseconds), for a total of 200 nanoseconds. (We are assuming that a page-table
lookup takes only one memory access, but it can take more, as we shall see.) To find
the effective memory-access time, we weight the case by its probability:
effective access time = 0.80 × 100 + 0.20 × 200
= 120 nanoseconds
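The weighted average above can be checked directly. The sketch below assumes, as in the example, that a TLB hit costs one memory access and a miss costs two (page-table access plus data access), with times in nanoseconds.

```python
def effective_access_time(hit_ratio, mem_ns):
    # TLB hit: one memory access. TLB miss: page-table access
    # plus the data access, i.e. 2 * mem_ns.
    return hit_ratio * mem_ns + (1 - hit_ratio) * (2 * mem_ns)

eat = effective_access_time(0.80, 100)
print(round(eat))   # 120
```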
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
Example: Intel 32- and 64-bit Architectures
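Of these page-table structures, the inverted page table is easy to sketch: there is one entry per physical frame recording which (process, page) pair occupies it, so translation searches the table and the matching index is the frame number. All values below are illustrative.

```python
# One entry per physical frame: (pid, page number), or None if free.
inverted_table = [(1, 0), (2, 3), (1, 2), None]
PAGE_SIZE = 4096

def translate(pid, page, offset=0):
    """Search the table; the matching index IS the frame number."""
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset
    raise LookupError("page fault")  # pair not resident in memory

# Process 1, page 2 is found at index 2, so it lives in frame 2.
print(translate(1, 2))   # 2 * 4096 = 8192
```

The table shrinks to one entry per frame instead of one per page per process, at the cost of this search, which real systems speed up with hashing.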
Segmentation
• Segmentation is a memory-management scheme that supports the
programmer's view of memory.
• A logical address space is a collection of segments.
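Translation under segmentation uses a segment table of (base, limit) pairs: the offset is checked against the segment's limit and, if legal, added to its base. The table contents below are illustrative.

```python
# segment number -> (base, limit), values in bytes (illustrative)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:               # out-of-range offsets trap to the OS
        raise MemoryError("segmentation fault")
    return base + offset

# Byte 53 of segment 2 maps to 4300 + 53 = 4353.
print(translate(2, 53))   # 4353
```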
Thank you