Memory Management

Memory management is a critical function of operating systems that oversees the allocation and tracking of primary memory for processes. It aims to maximize CPU and memory utilization while ensuring security and minimizing response times. Key concepts include address binding, paging, segmentation, and techniques such as swapping and overlays to efficiently manage memory resources.


Memory Management
Chapter 8: Operating System
Memory management

• Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution.
Functionality
• Memory management keeps track of each and every memory location, whether it is allocated to some process or free.
• It determines how much memory is to be allocated to each process.
• It decides which process will get memory at what time.
• It tracks whenever memory is freed or unallocated and updates the status accordingly.
Goals
• Maximum CPU utilization
• Minimize response time
• Maximum memory utilization
• Prioritize important processes
• Security for user programs and the operating system
Memory Management

Memory consists of a large array of bytes, each with its own address. The CPU
fetches instructions from memory according to the value of the program
counter. These instructions may cause additional loading from and storing to
specific memory addresses.
Memory Management Types:
• Basic Hardware
• Logical and physical addresses
• Dynamic Linking and shared libraries
Basic Hardware

• Main memory and the registers built into the processor itself are the
only general-purpose storage that the CPU can access directly.
• If the data are not in memory, they must be moved there before the
CPU can operate on them.
• A cache adds fast memory between the CPU and main memory, typically on the CPU chip for fast access.
Basic Hardware

• We first need to make sure that each process has a separate memory space.
• To separate memory spaces, we need the ability to determine the range of legal addresses.
• We can provide this protection by using two registers, usually a base and a limit.
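As a sketch, the base/limit check that the hardware performs on every user-mode access can be illustrated in Python (the register values below are made-up examples; a real CPU performs this comparison in hardware and traps on failure):

```python
# Sketch of the base/limit check done on every user-mode memory access.
# BASE and LIMIT are example register values, not real ones.
BASE = 300040   # start of the process's memory space
LIMIT = 120900  # size of the process's memory space

def check_access(address, base=BASE, limit=LIMIT):
    """Return True if the address is legal; hardware would trap otherwise."""
    if base <= address < base + limit:
        return True
    # Illegal access: the hardware raises a trap to the operating system.
    raise MemoryError(f"trap: address {address} outside [{base}, {base + limit})")

print(check_access(300040))  # first legal address -> True
```

Any address below the base or at or above base + limit is rejected, which is exactly the trap described on the next slide.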
Basic Hardware
• Any attempt by a program
executing in user mode to
access operating-system
memory or other users’
memory results in a trap
to the operating system.
Address Binding

• Usually, a program resides on a disk as a binary executable file. To be executed, the program must be brought into memory and placed within a process.
• Addresses in the source program are generally symbolic (such as the variable count). A compiler typically binds these symbolic addresses to relocatable addresses.
• Each binding is a mapping from one address space to another.
Different types of address binding
• Compile time. If you know at compile time where the process will
reside in memory, then absolute code can be generated.
• Load time. If it is not known at compile time where the process
will reside in memory, then the compiler must generate
relocatable code. In this case, final binding is delayed until load
time.
• Execution time. If the process can be moved during its execution
from one memory segment to another, then binding must be
delayed until run time.
Logical vs Physical Address Space
• An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit—that is, the one loaded into the memory-address register of the memory—is commonly referred to as a physical address.
Logical vs Physical Address Space
• A logical address space is bound to a separate physical address space.
– Logical address: generated by the CPU; also referred to as a virtual address.
• Physical address space: an address seen by the memory unit (i.e., the one loaded into the memory-address register of the memory) is commonly known as a physical address.
• A physical address is also known as a real address. The set of all physical addresses corresponding to these logical addresses is known as the physical address space. The run-time mapping from virtual to physical addresses is done by a hardware device, the memory-management unit (MMU), which computes each physical address. The physical address always remains constant.
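A minimal sketch of this run-time mapping, assuming the simplest MMU scheme in which the value in a relocation (base) register is added to every logical address (the register value is an example, not a real one):

```python
# Minimal sketch of run-time mapping by an MMU that uses a single
# relocation (base) register: physical = logical + relocation.
RELOCATION = 14000  # example relocation-register value

def mmu_map(logical_address, relocation=RELOCATION):
    """Translate a CPU-generated logical address to a physical address."""
    return logical_address + relocation

print(mmu_map(346))  # logical address 346 -> physical address 14346
```

The user program deals only with logical addresses (0 to max); it never sees the real physical addresses.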
Dynamic Linking and Shared Libraries
• Dynamically linked libraries are system libraries that are linked to user programs when
the programs are run.
• It is similar to dynamic loading. Here, though, linking, rather than loading, is
postponed until execution time.
• With dynamic linking, a stub is included in the image for each library routine
reference. The stub is a small piece of code that indicates how to locate the
appropriate memory-resident library routine or how to load the library if the routine
is not already present.
Dynamic Linking and Shared Libraries
• When the stub is executed, it checks to see whether the needed routine is already in memory. If it is
not, the program loads the routine into memory.
• All processes that use a language library execute only one copy of the library code.
Dynamic Linking and Shared Libraries
• This feature can be extended to library updates (such as bug fixes). A library may be replaced by a new version, and all programs that reference the library will automatically use the new version. Without dynamic linking, programs would need to be relinked to gain access to the new library.
• Thus, only programs that are compiled with the new library version are affected by any incompatible changes incorporated in it. Other programs linked before the new library was installed will continue using the older library. Such libraries are also known as shared libraries.
Example
• Consider a scenario where multiple programs use a common math library. With static linking, each program would contain its own copy of the library, potentially leading to larger file sizes. With dynamic linking, all programs can use the same shared instance of the library, saving disk space.
Swapping
Swapping
• Swapping is a mechanism in which a process can be swapped
temporarily out of main memory (or move) to secondary storage
(disk) and make that memory available to other processes. At some
later time, the system swaps back the process from the secondary
storage to main memory.
• Though performance is usually affected by the swapping process, it helps in running multiple large processes in parallel, and for that reason swapping is also known as a technique for memory compaction.
Overlays
• In memory management, overlays refer to a technique used to
manage memory efficiently by overlaying a portion of memory
with another program or data.
• The concept of overlays is that a running process does not use its complete program at the same time; it uses only some part of it. The overlay idea is: load whatever part is required, and once that part is done, unload it and bring in the next part you need.
• “The process of transferring a block of program code or other data into internal memory, replacing what is already stored.”
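A toy sketch of that idea, with hypothetical "pass 1"/"pass 2" phases standing in for real program parts (the names and the `OverlayRegion` class are illustrative, not a real overlay manager):

```python
# Toy overlay manager: one small memory region holds only the phase of
# the program currently needed; loading a new overlay replaces the old.
OVERLAYS = {
    "pass1": "code for pass 1",   # hypothetical program phases
    "pass2": "code for pass 2",
}

class OverlayRegion:
    def __init__(self):
        self.resident = None  # which overlay currently occupies the region

    def load(self, name):
        # Transferring a block replaces whatever is already stored.
        self.resident = name
        return OVERLAYS[name]

region = OverlayRegion()
region.load("pass1")    # run pass 1
region.load("pass2")    # pass 1 is unloaded, pass 2 takes its place
print(region.resident)  # -> pass2
```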
Advantages of using overlays

• Increased memory utilization: overlays allow multiple programs to share the same physical memory space, increasing memory utilization and reducing the need for additional memory.
• Reduced load time.
• Reduced memory requirements.
• Reduced time requirements.
Disadvantages of using overlays

• Complexity: overlays can be complex to implement and manage, especially for large programs.
• Performance overhead: the process of loading and unloading overlays can result in increased CPU and disk usage, which can slow down performance.
• Compatibility issues: overlays may not work on all hardware and software configurations, making it difficult to ensure compatibility across different systems.
• The overlay map must be specified by the programmer.
• The programmer must know the memory requirements.
• Designing the overlay structure is complex and not possible in all cases.
Memory Protection
• When the CPU scheduler selects a process for
execution, the dispatcher loads the relocation
and limit registers with the correct values as
part of the context switch.
• We do not always want to keep operating-system code and data in memory, as we might be able to use that space for other purposes. Such code is sometimes called transient operating-system code; it comes and goes as needed. Thus, using this code changes the size of the operating system during program execution.
Contiguous Memory Allocation
• The memory is usually divided into two partitions: one for the
resident operating system and one for the user processes.
• We can place the operating system in either low memory or high
memory. The major factor affecting this decision is the location of the
interrupt vector.
• Since the interrupt vector is often in low memory, programmers
usually place the operating system in low memory as well.
Memory Allocation
• One of the simplest methods for allocating memory is to divide
memory into several fixed-sized partitions. Each partition may contain
exactly one process. Thus, the degree of multiprogramming is bound
by the number of partitions.
• In the variable-partition scheme, the operating system keeps a table
indicating which parts of memory are available and which are
occupied. Initially, all memory is available for user processes and is
considered one large block of available memory, a hole. Eventually, as
you will see, memory contains a set of holes of various sizes.
Cont...
• Each partition contains exactly one process.
• Contiguous Technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning
Fixed (or static) partitioning

• Each process in this method of contiguous memory allocation is given a fixed-size contiguous block in the main memory.
• This means that the entire memory is partitioned into contiguous blocks of fixed size, and each time a process enters the system, it is given one of the available blocks. Each process receives a block of memory space of the same size, regardless of the size of the process.
• Static partitioning is another name for this approach.
Advantages
• Easy to implement
• It is simple to keep track of how many memory
blocks are still available, which determines how
many further processes can be allocated memory.

Disadvantages
• Internal fragmentation
• Limit on the number of processes; the degree of multiprogramming is low
• Limit on size of processes.
• External fragmentation
Variable Size Partitioning

• Depending on the needs of each process, space is allocated; no fixed-size blocks are present.
• This scheme is also known as dynamic partitioning and came into existence to overcome the drawback, i.e., the internal fragmentation caused by static partitioning. In this partitioning scheme, allocation is done dynamically.
• The size of a partition is not declared initially. Whenever a process arrives, a partition of size equal to the size of the process is created and allocated to the process. Thus the size of each partition equals the size of its process.
• As partition size varies according to the need of the process, there is no internal fragmentation in this scheme.
Advantages of Variable-size Partition

• No internal fragmentation
• No limitation on the size of a process
• Degree of multiprogramming is dynamic
Disadvantages of Variable-size Partition
• External fragmentation
• Difficult implementation
Memory Allocation
• This procedure is a particular instance of the general dynamic storage
allocation problem, which concerns how to satisfy a request of size n
from a list of free holes.
• There are many solutions to this problem
• first-fit,
• best-fit, and
• worst-fit
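The three placement strategies can be sketched over a list of free-hole sizes (the hole sizes below are arbitrary examples):

```python
# Placement strategies for the dynamic storage-allocation problem:
# choose a free hole for a request of size n.
def first_fit(holes, n):
    """Index of the first hole large enough for the request."""
    for i, h in enumerate(holes):
        if h >= n:
            return i
    return None

def best_fit(holes, n):
    """Smallest hole that is large enough (leaves the smallest leftover)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Largest hole (leaves the largest leftover)."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]  # free-hole sizes in KB (example)
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))  # -> 1 3 4
```

For a 212 KB request, first-fit takes the 500 KB hole (first one big enough), best-fit the 300 KB hole, and worst-fit the 600 KB hole.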
Fragmentation
• External fragmentation exists when there is enough total memory
space to satisfy a request but the available spaces are not contiguous.
• Another approach is to break the physical memory into fixed-sized blocks and allocate memory in units based on block size. With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation.
• Solutions to the problem of external fragmentation include compaction and noncontiguous allocation (paging and segmentation).
Paging and Segmentation
Paging
• Paging permits the physical
address space of a process to be
noncontiguous.
• paging avoids external
fragmentation.
• The basic method for
implementing paging involves
breaking physical memory into
fixed-sized blocks called frames
and breaking logical memory
into blocks of the same size
called pages.
Paging

• Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d).
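For a page size that is a power of two, this split is a simple division, sketched here with an assumed 4 KB page size:

```python
# Splitting a logical address into page number p and offset d,
# assuming a 4 KB (2**12 byte) page size as an example.
PAGE_SIZE = 4096

def split_address(logical):
    p = logical // PAGE_SIZE   # page number: the high-order bits
    d = logical % PAGE_SIZE    # page offset: the low-order 12 bits
    return p, d

print(split_address(8300))  # -> (2, 108): page 2, offset 108
```

The page number indexes the page table to find the frame; the offset is appended unchanged to the frame's base address.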
Paging with TLB
Effective memory-access time

• An 80-percent hit ratio, for example, means that we find the desired page number in the TLB 80 percent of the time. If it takes 100 nanoseconds to access memory, then a mapped-memory access takes 100 nanoseconds when the page number is in the TLB. If we fail to find the page number in the TLB, then we must first access memory for the page table and frame number (100 nanoseconds) and then access the desired byte in memory (100 nanoseconds), for a total of 200 nanoseconds. (We are assuming that a page-table lookup takes only one memory access, but it can take more.) To find the effective memory-access time, we weight each case by its probability:
effective access time = 0.80 × 100 + 0.20 × 200
= 120 nanoseconds
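The same calculation as a small function (the 100 ns access time and the one-extra-access miss penalty follow the example above):

```python
# Effective memory-access time with a TLB: weight the hit and miss
# cases by their probabilities (values follow the worked example).
def effective_access_time(hit_ratio, mem_ns=100):
    hit_time = mem_ns        # TLB hit: one memory access
    miss_time = 2 * mem_ns   # TLB miss: page-table access + data access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(0.80))  # -> 120.0 nanoseconds
```

Raising the hit ratio to 99 percent would bring the effective time down to 101 ns, which is why even a small TLB pays off.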
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
Example: Intel 32 and 64-bit Architectures
Segmentation

• Segmentation is a memory-management scheme that supports the programmer's view of memory.
• A logical address space is a collection of
segments.
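A sketch of segment-based translation: a logical address is a pair (segment number, offset), and a segment table of base/limit pairs maps it to a physical address (the table values below are illustrative examples):

```python
# Sketch of segmentation: each segment has a base and a limit; an
# offset within the limit is added to the base (example table values).
SEGMENT_TABLE = [
    (1400, 1000),  # segment 0: base 1400, limit 1000
    (6300, 400),   # segment 1
    (4300, 1100),  # segment 2
]

def translate(segment, offset):
    """Map a <segment, offset> logical address to a physical address."""
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        # Offset past the segment's end: hardware traps to the OS.
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))  # byte 53 of segment 2 -> physical address 4353
```

Unlike a page, a segment is a variable-length, programmer-visible unit (code, stack, heap), so the limit check is essential for protection.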
Thank you