Memory Management: Process Address Space

Memory management handles primary memory and moves processes between main memory and disk during execution. It tracks allocated and free memory locations and decides which processes get memory and when. There are three types of addresses: symbolic addresses used in source code, relative addresses produced by compilation, and physical addresses produced at load time. Virtual addresses differ from physical addresses under execution-time binding, where translation is performed by a memory management unit. Static loading loads everything at start, while dynamic loading loads routines on demand. Contiguous allocation assigns a single contiguous block but can cause fragmentation, while noncontiguous allocation has the overhead of address translation.

Uploaded by

yashika sarode


MEMORY MANAGEMENT

Memory management is the functionality of an operating system which handles or manages primary
memory and moves processes back and forth between main memory and disk during execution. Memory
management keeps track of each and every memory location, regardless of whether it is allocated to some
process or free. It checks how much memory is to be allocated to processes, decides which process
will get memory at what time, and tracks whenever some memory gets freed or unallocated,
updating the status accordingly.
This tutorial will teach you basic concepts related to Memory Management.

Process Address Space


The process address space is the set of logical addresses that a process references in its code. For example,
when 32-bit addressing is in use, addresses can range from 0 to 0x7fffffff; that is, 2^31 possible numbers,
for a total theoretical size of 2 gigabytes.
The operating system takes care of mapping the logical addresses to physical addresses at the time of
memory allocation to the program. There are three types of addresses used in a program before and after
memory is allocated −

The three kinds of memory addresses are:

1. Symbolic addresses: the addresses used in source code. Variable names, constants, and
instruction labels are the basic elements of the symbolic address space.

2. Relative addresses: at the time of compilation, a compiler converts symbolic addresses into
relative addresses.

3. Physical addresses: the loader generates these addresses when a program is
loaded into main memory.

Virtual and physical addresses are the same in compile-time and load-time address-binding schemes.
Virtual and physical addresses differ in execution-time address-binding scheme.
The set of all logical addresses generated by a program is referred to as a logical address space. The set of
all physical addresses corresponding to these logical addresses is referred to as a physical address space.
The runtime mapping from virtual to physical address is done by the memory management unit (MMU),
which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a
physical address.

 The value in the base register is added to every address generated by a user process, which is treated
as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an
attempt by the user to use address location 100 will be dynamically relocated to location 10100.

 The user program deals with virtual addresses; it never sees the real physical addresses.
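The base-register relocation described above can be sketched in a few lines; this is an illustrative model only, and the class name and the limit value are our own choices (the base value 10000 matches the example):

```python
class MMU:
    """Toy model of base/limit relocation (illustrative, not a real MMU)."""

    def __init__(self, base, limit):
        self.base = base    # start of the process's partition in physical memory
        self.limit = limit  # size of the process's logical address space

    def translate(self, logical):
        # Every address generated by the process is treated as an offset,
        # and the base register value is added on the way to memory.
        if logical < 0 or logical >= self.limit:
            raise MemoryError("address outside the process address space")
        return self.base + logical

mmu = MMU(base=10000, limit=4000)
print(mmu.translate(100))  # logical address 100 is relocated to 10100
```

Because the process only ever produces logical addresses, it never sees the physical address 10100; the addition happens in hardware on every memory reference.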
Static vs Dynamic Loading
The choice between static and dynamic loading is made at the time the computer program is being
developed. If the program is to be loaded statically, then at compilation time the complete
program is compiled and linked without leaving any external program or module dependency. The
linker combines the object program with the other necessary object modules into an absolute program, which
also includes logical addresses.
If you are writing a dynamically loaded program, then your compiler compiles the program and, for all
the modules which you want to include dynamically, only references are provided; the rest of the work
is done at execution time.
With static loading, the absolute program (and data) is loaded into memory at load time in order
for execution to start.
With dynamic loading, routines of a library are stored on disk in relocatable form
and are loaded into memory only when they are needed by the program.

Static vs Dynamic Linking


As explained above, when static linking is used, the linker combines all other modules needed by a
program into a single executable program to avoid any runtime dependency.
When dynamic linking is used, it is not required to link the actual module or library with the program,
rather a reference to the dynamic module is provided at the time of compilation and linking. Dynamic Link
Libraries (DLL) in Windows and Shared Objects in Unix are good examples of dynamic libraries.

Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or moved)
to secondary storage (disk), making that memory available to other processes. At some later time, the
system swaps the process back from secondary storage to main memory.
Though performance is usually affected by the swapping process, it helps in running multiple large
processes in parallel, and for that reason swapping is also known as a technique for memory
compaction.
A logical address is an address generated by the CPU while a program is running. A logical address is also
called a virtual address, as it does not exist physically. The set of all logical addresses generated by a
program is called the logical address space. A logical address is mapped to its corresponding physical
address by the MMU (Memory Management Unit).
Compile-time and load-time address binding produce identical logical and physical addresses,
while run-time address binding produces different ones.
A physical address is the actual address of a location in the memory unit. It is computed by the MMU. The
user can access a physical address in the memory unit only through the corresponding logical address.
Contiguous Memory Allocation

The main memory must accommodate both the operating system and the various user processes.
We therefore need to allocate different parts of the main memory in the most efficient way
possible.

The memory is usually divided into two partitions: one for the resident operating system, and one
for the user processes. We may place the operating system in either low memory or high
memory. With this approach each process is contained in a single contiguous section of memory.

One of the simplest methods for memory allocation is to divide memory into several fixed-sized
partitions. Each partition may contain exactly one process. In this multiple-partition method,
when a partition is free, a process is selected from the input queue and is loaded into the free
partition. When the process terminates, the partition becomes available for another process. The
operating system keeps a table indicating which parts of memory are available and which are
occupied. Finally, when a process arrives and needs memory, a memory section large enough for
this process is provided.

Note: Depending on the method used, this approach can suffer from external as well as internal
memory fragmentation.
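The bookkeeping just described can be sketched as follows; the partition sizes and process names are invented for this illustration:

```python
# Fixed-sized partitions; each holds at most one process. A real OS table
# would also record each partition's base address.
partitions = [
    {"size": 100, "pid": None},
    {"size": 500, "pid": None},
    {"size": 200, "pid": None},
]

def load(pid, need):
    """Place a process in the first free partition large enough for it."""
    for part in partitions:
        if part["pid"] is None and part["size"] >= need:
            part["pid"] = pid
            return True
    return False  # no fit: the process waits in the input queue

def terminate(pid):
    """On termination, the partition becomes available for another process."""
    for part in partitions:
        if part["pid"] == pid:
            part["pid"] = None

load("A", 90)   # fits the 100K partition
load("B", 450)  # fits the 500K partition
```

Note that process A leaves 10K of its partition unused: exactly the internal fragmentation the note above warns about.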

BASIS FOR COMPARISON: CONTIGUOUS vs NONCONTIGUOUS MEMORY ALLOCATION

Basic: Contiguous allocation assigns consecutive blocks of memory to a process; noncontiguous
allocation assigns separate blocks of memory to a process.

Overheads: Contiguous memory allocation does not have the overhead of address translation during
execution of a process; noncontiguous memory allocation has that overhead.

Execution rate: A process executes faster under contiguous memory allocation, and comparatively
slower under noncontiguous memory allocation.

Solution: For contiguous allocation, the memory space is divided into fixed-sized partitions and
each partition is allocated to a single process only. For noncontiguous allocation, the process is
divided into several blocks which are placed in different parts of memory according to the
availability of memory space.

Table: For contiguous allocation, the operating system maintains one table listing the available
and occupied partitions in the memory space. For noncontiguous allocation, a table has to be
maintained for each process, carrying the base addresses of each block acquired by that process
in memory.

Contiguous Memory Allocation

The operating system and the user’s processes both must be accommodated in the main memory. Hence the
main memory is divided into two partitions: at one partition the operating system resides and at other the
user processes reside. In usual conditions, the several user processes must reside in the memory at the same
time, and therefore, it is important to consider the allocation of memory to the processes.

The Contiguous memory allocation is one of the methods of memory allocation. In contiguous memory
allocation, when a process requests for the memory, a single contiguous section of memory blocks is
assigned to the process according to its requirement.
Contiguous memory allocation can be achieved by dividing the
memory into fixed-sized partitions and allocating each partition to a single process only. But this
bounds the degree of multiprogramming to the number of fixed partitions made in the memory.
Contiguous memory allocation also leads to internal fragmentation: if a fixed-sized memory
block allocated to a process is slightly larger than its requirement, the leftover memory space in the
block is called internal fragmentation. When the process residing in a partition terminates, the partition
becomes available for another process.

In the variable partitioning scheme, the operating system maintains a table which indicates which
partitions of the memory are free and which are occupied by processes. Contiguous memory allocation
speeds up the execution of a process by reducing the overhead of address translation.

Non-Contiguous Memory Allocation

Non-contiguous memory allocation allows a process to acquire several memory blocks at
different locations in memory according to its requirement. Noncontiguous memory allocation
also reduces the memory wastage caused by internal and external fragmentation, since it can
utilize the memory holes created by such fragmentation.

BASIS FOR COMPARISON: INTERNAL vs EXTERNAL FRAGMENTATION

Basic: Internal fragmentation occurs when fixed-sized memory blocks are allocated to processes.
External fragmentation occurs when variable-sized memory spaces are allocated to processes
dynamically.

Occurrence: Internal fragmentation arises when the memory assigned to a process is slightly larger
than the memory requested, creating unused space inside the allocated block. External
fragmentation arises when a process is removed from memory, leaving scattered free holes in
memory.

Solution: For internal fragmentation, partition the memory into variable-sized blocks and assign
the best-fit block to the process. For external fragmentation: compaction, paging, and
segmentation.

Internal Fragmentation

When a program is allocated a memory block and the program is smaller than that block,
the remaining space is wasted; this situation is called internal fragmentation. Generally,
internal fragmentation occurs with static or fixed memory partitions.

External Fragmentation
External fragmentation occurs when enough total memory is available for a process but it cannot
be allocated because the free memory blocks are small and scattered; that is, the program is larger
than any single available memory hole. Generally, external fragmentation occurs with dynamic or
variable-size partitions. External fragmentation can be solved using the compaction technique, and
it can be prevented by paging or segmentation mechanisms.

Paging and segmentation are the two ways which allow a process's physical address space to be non-
contiguous. In non-contiguous memory allocation, the process is divided into blocks (pages or segments)
which are placed into different areas of memory according to the availability of memory space.

The noncontiguous memory allocation has the advantage of reducing memory wastage, but it
increases the overhead of address translation. As the parts of the process are placed in different
locations in memory, it slows the execution of the process, because time is consumed in address translation.

Here, the operating system needs to maintain a table for each process which contains the base address of
each block acquired by the process in memory space.

In operating system, following are four common memory management techniques.

Single contiguous allocation: the simplest allocation method, used by MS-DOS.
All memory (except some reserved for the OS) is available to a single process.

Partitioned allocation: memory is divided into different blocks (partitions).

Paged memory management: memory is divided into fixed-sized units called
page frames; used in virtual memory environments.

Segmented memory management: memory is divided into different segments (a
segment is a logical grouping of a process's data or code).
With this management, allocated memory doesn't have to
be contiguous.
Most of the operating systems (for example Windows and Linux) use Segmentation with Paging. A process
is divided in segments and individual segments have pages.
In Partition Allocation, when there are more than one partition freely available to accommodate a process’s
request, a partition must be selected. To choose a particular partition, a partition allocation method is
needed. A partition allocation method is considered better if it avoids internal fragmentation.

Below are the various partition allocation schemes :

1. First Fit: allocate the first partition from the top of main memory that is
sufficient for the process.

2. Best Fit: allocate the process to the smallest sufficient partition
among the free available partitions.

3. Worst Fit: allocate the process to the largest sufficient partition
among the freely available partitions in main memory.

4. Next Fit: similar to first fit, but it searches for the first
sufficient partition starting from the last allocation point.

Although best fit minimizes wasted space, it consumes a lot of processor time searching for the block
that is closest to the required size. Best fit may also perform worse than other algorithms in some cases. For
example, see the exercise below.

NUMERICAL

Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, 600 KB (in order), how would the first-fit,
best-fit, and worst-fit algorithms place
processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient
use of memory?

First-fit:
212K is put in 500K partition
417K is put in 600K partition
112K is put in 288K partition (new partition 288K = 500K - 212K)
426K must wait

Best-fit:
212K is put in 300K partition
417K is put in 500K partition
112K is put in 200K partition
426K is put in 600K partition

Worst-fit:
212K is put in 600K partition
417K is put in 500K partition
112K is put in 388K partition
426K must wait
In this example, best-fit turns out to be the best.
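The three placements above can be reproduced with a short simulation, under the assumption that an allocation splits the chosen hole and leaves the remainder as a new hole (the function and variable names are ours):

```python
def place(hole_sizes, requests, policy):
    """Return the hole size chosen for each request (None = must wait)."""
    holes = list(hole_sizes)  # hole sizes, kept in memory order
    chosen_sizes = []
    for need in requests:
        fits = [h for h in holes if h >= need]
        if not fits:
            chosen_sizes.append(None)      # request must wait
            continue
        if policy == "first":
            chosen = fits[0]               # first sufficient hole
        elif policy == "best":
            chosen = min(fits)             # smallest sufficient hole
        else:                              # "worst"
            chosen = max(fits)             # largest sufficient hole
        holes[holes.index(chosen)] = chosen - need  # split off the remainder
        chosen_sizes.append(chosen)
    return chosen_sizes

parts = [100, 500, 200, 300, 600]
reqs = [212, 417, 112, 426]
print(place(parts, reqs, "first"))  # [500, 600, 288, None]
print(place(parts, reqs, "best"))   # [300, 500, 200, 600]
print(place(parts, reqs, "worst"))  # [600, 500, 388, None]
```

The printed hole sizes match the worked answer: only best fit satisfies all four requests here.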

PAGING

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical
memory. This scheme permits the physical address space of a process to be non – contiguous.
 Logical Address or Virtual Address (represented in bits): An address generated by the CPU
 Logical Address Space or Virtual Address Space (represented in words or bytes): The set of all logical
addresses generated by a program
 Physical Address (represented in bits): An address actually available on memory unit
 Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses

The mapping from virtual to physical address is done by the memory management unit (MMU) which is a
hardware device and this mapping is known as paging technique.
 The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size

Logical Address is divided into:
 Page number (p): the number of bits required to represent the page number in the Logical
Address Space.
 Page offset (d): the number of bits required to represent a particular word within a page; it is
determined by the page size.
Physical Address is divided into:
 Frame number (f): the number of bits required to represent the frame number in the Physical
Address Space.
 Frame offset (d): the number of bits required to represent a particular word within a frame; it is
determined by the frame size, which equals the page size.
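Because the page size is a power of two, splitting a logical address into page number and offset is just a shift and a mask. A small sketch, where the 4 KB page size is an assumption for illustration:

```python
PAGE_SIZE = 4096                 # assumed page size: 2**12, so d is 12 bits
OFFSET_BITS = PAGE_SIZE.bit_length() - 1

def split(logical):
    """Split a logical address into (page number p, page offset d)."""
    return logical >> OFFSET_BITS, logical & (PAGE_SIZE - 1)

def physical(frame, offset):
    """Join frame number f with the unchanged offset d."""
    return (frame << OFFSET_BITS) | offset

print(split(8195))     # address 8195 = page 2, offset 3
print(physical(5, 3))  # frame 5, offset 3 -> physical address 20483
```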

The hardware implementation of the page table can be done by using dedicated registers. But the use of
registers for the page table is satisfactory only if the page table is small. If the page table contains a large
number of entries, then we can use a TLB (translation look-aside buffer), a special, small, fast look-up
hardware cache.
 The TLB is associative, high-speed memory.
 Each entry in the TLB consists of two parts: a tag and a value.
 When this memory is used, an item is compared with all tags simultaneously. If the item is found,
then the corresponding value is returned.
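The TLB lookup can be modelled with a dictionary standing in for the parallel tag comparison done in hardware; the page-table contents below are invented for this sketch:

```python
tlb = {}                          # tag (page number) -> value (frame number)
page_table = {0: 7, 1: 3, 2: 9}   # assumed page-table contents

def lookup(page):
    """Return (frame, 'hit' or 'miss'); a miss falls back to the page table."""
    if page in tlb:               # hardware compares all tags at once
        return tlb[page], "hit"
    frame = page_table[page]      # TLB miss: consult the full page table
    tlb[page] = frame             # cache the translation for next time
    return frame, "miss"

print(lookup(1))  # first access -> (3, 'miss')
print(lookup(1))  # repeated access -> (3, 'hit')
```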
Cache Memory
Since the CPU has to fetch instructions from main memory, the speed of the CPU depends on the fetch
speed from main memory. The CPU contains registers, which have the fastest access, but they are limited
in number as well as costly. Cache is cheaper than registers, so more of it can be provided. Cache memory
is a very high speed memory placed between the CPU and main memory, so that memory accesses can
proceed at close to the speed of the CPU.

Levels of memory –

 Level 1 or Registers – memory in which data is stored and accessed immediately by the CPU. The
most commonly used registers are the accumulator, program counter, address registers, etc.
 Level 2 or Cache memory – the fastest memory after registers; data is temporarily stored here for
faster access.
 Level 3 or Main memory – the memory on which the computer currently works; it is comparatively
small in size, and once power is off the data no longer stays in this memory.
 Level 4 or Secondary memory – external memory which is not as fast as main memory, but data
stays permanently in this memory.
Cache Performance
 If the processor finds that the memory location is in the cache, a cache hit has occurred and data is read
from the cache.
 If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache
miss, the cache allocates a new entry and copies in data from main memory, then the request is fulfilled
from the contents of the cache.
The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.
Hit ratio = hit / (hit + miss) = no. of hits/total accesses
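The hit ratio formula applied to a toy access trace; the trace and the set-based cache model are our own simplification:

```python
cache = set()        # simplified cache: just remembers which blocks are in it
hits = misses = 0
for block in [1, 2, 1, 3, 2, 1]:   # made-up trace of memory block accesses
    if block in cache:
        hits += 1                  # cache hit: data read from the cache
    else:
        misses += 1                # cache miss: copy block from main memory
        cache.add(block)

hit_ratio = hits / (hits + misses)
print(hit_ratio)  # 3 hits out of 6 accesses -> 0.5
```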

Virtual Memory
Virtual memory is a scheme in which large programs can store themselves in the form of pages during their
execution, and only the required pages or portions of processes are loaded into main memory. This
technique is useful because a large virtual memory is provided for user programs even when only a very
small physical memory is present.
In real scenarios, most processes never need all their pages at once, for following reasons :

 Error handling code is not needed unless that specific error occurs, some of which are quite rare.
 Arrays are often over-sized for worst-case scenarios, and only a small fraction of the arrays are actually
used in practice.
 Certain features of certain programs are rarely used.

Benefits of having Virtual Memory

1. Large programs can be written, as the virtual address space available is huge compared to physical memory.
2. Less I/O is required, which leads to faster and easier swapping of processes.
3. More physical memory is available, as programs are stored in virtual memory and so occupy much less
space in actual physical memory.

Demand Paging
The basic idea behind demand paging is that when a process is swapped in, its pages are not swapped in all
at once. Rather, they are swapped in only when the process needs them (on demand). This is termed a lazy
swapper, although a pager is a more accurate term.
Initially, only those pages are loaded which the process requires immediately.
The pages that are not moved into memory are marked as invalid in the page table. For an invalid entry,
the rest of the table entry is empty. Pages that are loaded in memory are marked as valid, along
with the information about where to find the swapped-out page.
When the process requires any of the page that is not loaded into the memory, a page fault trap is triggered
and following steps are followed,

1. The memory address which is requested by the process is first checked, to verify the request made by the
process.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly from a free-frame list, where
the required page will be moved.
4. A new operation is scheduled to move the necessary page from disk to the specified memory location. (
This will usually block the process on an I/O wait, allowing some other process to use the CPU in the
meantime. )
5. When the I/O operation is complete, the process's page table is updated with the new frame number, and
the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the beginning.
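The six steps above can be sketched end to end; the pages on "disk", the frame numbers, and all names are invented for this model:

```python
from collections import deque

disk = {0: "code", 1: "data", 2: "stack"}   # page contents on disk (made up)
page_table = {p: {"valid": False, "frame": None} for p in disk}
free_frames = deque([10, 11, 12])           # the free-frame list (step 3)
memory = {}                                 # frame number -> page contents

def access(page):
    if page not in page_table:              # steps 1-2: invalid request
        raise MemoryError("invalid reference: process terminated")
    entry = page_table[page]
    if entry["valid"]:
        return memory[entry["frame"]]       # no fault: page already resident
    frame = free_frames.popleft()           # step 3: locate a free frame
    memory[frame] = disk[page]              # step 4: disk -> memory I/O
    entry["frame"], entry["valid"] = frame, True  # step 5: update page table
    return memory[frame]                    # step 6: restart the access

print(access(0))  # first touch faults the page in, then returns its contents
```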

There are cases when no pages are loaded into the memory initially, pages are only loaded when demanded
by the process by generating page faults. This is called Pure Demand Paging.
The only major issue with demand paging is that, after a new page is loaded, the instruction that caused
the page fault must be restarted. This is not a big issue for small programs, but for larger programs with
frequent page faults it affects performance drastically.
Page Replacement
As studied in demand paging, only certain pages of a process are loaded initially into memory. This
allows us to fit a greater number of processes into memory at the same time. But what happens when a
process requests more pages and no free memory is available to bring them in? The following steps can be
taken to deal with this problem:

1. Put the process in the wait queue, until any other process finishes its execution thereby freeing frames.
2. Or, remove some other process completely from the memory to free frames.
3. Or, find some pages that are not being used right now, move them to the disk to get free frames. This
technique is called Page replacement and is most commonly used. We have some great algorithms to
carry on page replacement efficiently.

Basic Page Replacement

 Find the location of the page requested by ongoing process on the disk.
 Find a free frame. If there is a free frame, use it. If there is no free frame, use a page-replacement
algorithm to select any existing frame to be replaced, such frame is known as victim frame.
 Write the victim frame to disk. Change all related page tables to indicate that this page is no longer in
memory.
 Move the required page and store it in the frame. Adjust all related page and frame tables to indicate the
change.
 Restart the process that was waiting for this page.

Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in
secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs,
the operating system does not copy any of the old program's pages out to disk or any of the new
program's pages into main memory. Instead, it just begins executing the new program after loading the
first page and fetches that program's pages as they are referenced.
While executing a program, if the program references a page which is not available in main memory
because it was swapped out a little earlier, the processor treats this invalid memory reference as a page
fault and transfers control from the program to the operating system to demand the page back into
memory.

Advantages
Following are the advantages of Demand Paging −

 Large virtual memory.


 More efficient use of memory.
 There is no limit on degree of multiprogramming.
Disadvantages
 Number of tables and the amount of processor overhead for handling page interrupts are greater than
in the case of the simple paged management techniques.

Page Replacement Algorithm


Page replacement algorithms are the techniques by which an operating system decides which memory
pages to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens
whenever a page fault occurs and a free page cannot be used for the allocation, either because none is
available or because the number of free pages is lower than required.
When the page that was selected for replacement and paged out is referenced again, it has to be read in
from disk, and this requires waiting for I/O completion. This determines the quality of the page
replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about page accesses provided by the
hardware and tries to select which pages should be replaced so as to minimize the total number of page
misses, while balancing this against the costs of primary storage and of the processor time consumed by
the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by
running it on a particular string of memory references and computing the number of page faults.

Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by
tracing a given system and recording the address of each memory reference. The latter choice produces a
large amount of data, about which we note two things.

 For a given page size, we need to consider only the page number, not the entire address.
 If we have a reference to a page p, then any immediately following references to page p will never
cause a page fault. Page p will be in memory after the first reference; the immediately following
references will not fault.
 For example, consider the following sequence of addresses − 123,215,600,1234,76,96
 If page size is 100, then the reference string is 1,2,6,12,0,0
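The example above reduces to integer division by the page size; a one-line check:

```python
addresses = [123, 215, 600, 1234, 76, 96]
PAGE_SIZE = 100

# For a given page size, only the page number matters, not the full address.
reference_string = [addr // PAGE_SIZE for addr in addresses]
print(reference_string)  # [1, 2, 6, 12, 0, 0]
```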

First In First Out (FIFO) algorithm


 Oldest page in main memory is the one which will be selected for replacement.
 Easy to implement, keep a list, replace pages from the tail and add new pages at the head.
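The list-based description above can be sketched as a small fault counter (the names are ours):

```python
from collections import deque

def fifo_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()                 # oldest page at the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                 # page already in memory: no fault
        faults += 1
        if len(frames) == frame_count:
            frames.popleft()         # replace the oldest page in memory
        frames.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))  # 9 faults
print(fifo_faults(ref, 4))  # 10 faults: more frames, yet more faults
```

The second result illustrates that under FIFO, giving a process more frames can sometimes increase its fault count.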

Optimal Page algorithm


 An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. Such an
algorithm exists and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of time; this requires knowing the
time when each page will next be used.
Least Recently Used (LRU) algorithm
 Page which has not been used for the longest time in main memory is the one which will be selected
for replacement.
 Easy to implement, keep a list, replace pages by looking back into time.
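The "look back into time" rule can be kept as an ordering on a list, with the most recently used page at the end and the victim at the front (a sketch with our own names):

```python
def lru_faults(reference_string, frame_count):
    """Count page faults under LRU replacement."""
    frames = []                      # least recently used page at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)      # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.pop(0)        # replace the least recently used page
        frames.append(page)          # page is now the most recently used
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```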

Page Buffering algorithm

 To get a process start quickly, keep a pool of free frames.


 On a page fault, select a page to be replaced.
 Write the new page into a frame from the free pool, mark the page table, and restart the process.
 Now write the dirty victim page out to disk and place the frame holding the replaced page in the free pool.
Least frequently Used(LFU) algorithm
 The page with the smallest count is the one which will be selected for replacement.
 This algorithm suffers from the situation in which a page is used heavily during the initial phase of a
process, but then is never used again.

Most frequently Used(MFU) algorithm


 This algorithm is based on the argument that the page with the smallest count was probably just
brought in and has yet to be used
Thrashing
A process that is spending more time paging than executing is said to be thrashing. In other words it means,
that the process doesn't have enough frames to hold all the pages for its execution, so it is swapping pages in
and out very frequently to keep executing. Sometimes, the pages which will be required in the near future
have to be swapped out.
Initially, when CPU utilization is low, the process scheduler, in order to increase the level of
multiprogramming, loads multiple processes into memory at the same time, allocating a limited number
of frames to each process. As memory fills up, processes start to spend a lot of time waiting for their
required pages to be swapped in, again leading to low CPU utilization because most of the processes are
waiting for pages. Hence the scheduler loads more processes to increase CPU utilization; as this continues,
at some point the complete system comes to a halt.

To prevent thrashing we must provide processes with as many frames as they really need "right now".

SEGMENTATION
A Memory Management technique in which memory is divided into variable sized chunks which can be
allocated to processes. Each chunk is called a Segment. A table stores the information about all such
segments and is called Segment Table.
Segment Table – It maps the two-dimensional logical address (segment number, offset) into a
one-dimensional physical address. Each of its table entries has:
 Base Address: It contains the starting physical address where the segments reside in memory.
 Limit: It specifies the length of the segment.
Address generated by the CPU is divided into:
 Segment number (s): the number of bits required to represent the segment number.
 Segment offset (d): the number of bits required to represent the offset within the segment; a valid
offset must be less than the segment limit.
Advantages of Segmentation –
 No Internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation –
 As processes are loaded and removed from the memory, the free memory space is broken into little
pieces, causing External fragmentation.
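Translation through the segment table is a base/limit lookup; the segment sizes below are example values, not taken from the text:

```python
segment_table = [
    {"base": 1400, "limit": 1000},  # segment 0 (example values)
    {"base": 6300, "limit": 400},   # segment 1
    {"base": 4300, "limit": 1100},  # segment 2
]

def translate(segment, offset):
    """Map a two-dimensional (segment, offset) address to a physical one."""
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("trap: offset beyond segment limit")
    return entry["base"] + offset

print(translate(2, 53))  # byte 53 of segment 2 lives at physical address 4353
```

An offset at or beyond a segment's limit traps to the operating system, which is how segmentation enforces memory protection per segment.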
