CH 3: OS Memory Management

Memory management is crucial for effective utilization of limited main memory in computer systems, involving techniques like contiguous and non-contiguous memory allocation. It includes concepts such as logical and physical address spaces, paging, and garbage collection to manage memory efficiently. Demand paging allows for efficient memory use by loading only necessary pages, while page replacement algorithms help manage page faults when memory is full.


Memory Management

• What do you mean by memory management?


• Memory is an important part of the computer system that is used to store
data.
• Its management is critical because the amount of main memory available
in a computer system is very limited.
• At any time, many processes are competing for it. Moreover, to increase
performance, several processes are executed simultaneously.
• For this, we must keep several processes in main memory, so it is
even more important to manage them effectively.
• The memory manager keeps track of the status of memory locations,
i.e., whether each location is free or allocated. It addresses primary memory
by providing abstractions so that software perceives that a large memory is
allocated to it.
Address Space

• In operating systems, address space refers to the range of memory
addresses that a process or system can use to store data and execute
code. It defines the boundaries within which a program can operate in
memory.
• Logical Address space:
• An address generated by the CPU is known as a
“Logical Address”.
• It is also known as a Virtual address.
• Logical address space can be defined as the size of the
process.
• A logical address can be changed.
• Physical Address space:
• An address seen by the memory unit (i.e., the one
loaded into the memory address register of the
memory) is commonly known as a "Physical Address".
• A physical address is also known as a Real address.
• It refers to the actual addresses in the computer's physical memory
(RAM).
• The physical address always remains constant.
Memory Management Techniques:
Contiguous memory management
schemes:
• Contiguous memory allocation is a method in which a
single contiguous section of memory is allocated to a
process or file needing it. All of the memory assigned to
the process therefore resides in one place, which means
that the free/unused memory partitions are not
scattered randomly across the whole memory space.
• Advantages of contiguous memory management
schemes:
• Simple to implement.
• Easy to manage and design.
• In a single contiguous memory management scheme,
once a process is loaded, it is given the full processor
time, and no other process will interrupt it.
• Disadvantages of contiguous memory
management schemes:
• Wastage of memory space due to unused memory as
the process is unlikely to use all the available memory
space.
• The CPU remains idle, waiting for the disk to load the
binary image into the main memory.
• A program cannot be executed if it is too large to fit in
the entire available main memory space.
• It does not support multiprogramming, i.e., it cannot
handle multiple programs simultaneously.
• Non-Contiguous memory management schemes:
• In a Non-Contiguous memory management scheme, the
program is divided into different blocks and loaded at
different portions of the memory that need not
necessarily be adjacent to one another.
• This scheme can be classified depending upon the size
of blocks and whether the blocks reside in the main
memory or not.
• Managing free space (Garbage collection):
• Garbage collection is a function of an operating system
or programming language that reclaims memory no
longer in use.
• For example, Java and .NET have built-in garbage
collection, but C and C++ do not, and programmers
have to write the code to allocate and deallocate, which
is tedious and error prone.
• Garbage collection (GC) is a dynamic approach to
automatic memory management and heap allocation
that processes and identifies dead memory blocks and
reallocates storage for reuse.
• The primary purpose of garbage collection is to reduce
memory leaks.
• GC implementations commonly use three primary approaches, as
follows:
• Mark-and-sweep - When memory runs out, the GC locates all
accessible (reachable) memory and then reclaims the remaining,
unreachable memory as available.
• Reference counting - Each allocated object keeps a count of the
number of references to it. When the reference count drops to
zero, the object is garbage and is then destroyed. The
freed memory returns to the memory heap.
• Copy collection - There are two memory partitions. When the
first partition is full, the GC locates all accessible data
structures and copies them to the second partition,
compacting memory after the GC process and leaving
continuous free memory.
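• As a rough illustration of the reference-counting approach, here is a minimal
Python sketch (the RefCounted class and its method names are hypothetical,
not any particular runtime's API): an object keeps a count of references to it
and is reclaimed as soon as the count drops to zero.

# Minimal sketch of reference-counting garbage collection (illustrative only).
class RefCounted:
    def __init__(self, name):
        self.name = name
        self.refcount = 0          # number of live references to this object

    def add_ref(self):
        self.refcount += 1

    def release(self):
        self.refcount -= 1
        if self.refcount == 0:     # no references left: object is garbage
            self.free()

    def free(self):
        # In a real runtime this would return the block to the heap.
        print(f"reclaiming memory of {self.name}")

obj = RefCounted("block A")
obj.add_ref()      # first reference created
obj.add_ref()      # second reference created
obj.release()      # one reference dropped, count = 1
obj.release()      # count = 0 -> object reclaimed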
Virtual memory
• Virtual memory is a storage scheme that gives the user the illusion
of having a very large main memory. This is done by treating a part
of secondary memory as if it were main memory.
• Instead of loading one big process in the main memory, the
Operating System loads the different parts of more than one
process in the main memory.
• By doing this, the degree of multiprogramming will be increased
and therefore, the CPU utilization will also be increased.
• A computer can address more memory than the amount
physically installed on the system. This extra memory is actually
called virtual memory and it is a section of a hard disk that's
set up to emulate the computer's RAM.
• How Virtual Memory Works?
• Virtual memory is very common in modern systems. In this scheme,
whenever some pages need to be loaded into main memory for
execution and there is not enough memory available for all of
those pages, then instead of preventing the pages from entering
main memory, the OS searches for the areas of RAM that have been
least recently used or not referenced and copies them to
secondary memory to make space for the new pages in main
memory.
• Since this whole procedure happens automatically, it makes the
computer appear to have unlimited RAM.
Paging
• Let's consider a process P1 of size 2 MB and a main memory
that is divided into three partitions. Out of the three partitions,
two partitions are holes of size 1 MB each.
• P1 needs 2 MB of space in main memory to be loaded. We have
two holes of 1 MB each, but they are not contiguous.
• Although 2 MB of space is available in main memory in the
form of those holes, it remains useless until it becomes
contiguous. This is a serious problem to address.
• We need some kind of mechanism that can store one
process at different locations of the memory.
• The idea behind paging is to divide the process into pages so that
we can store them in the memory at different holes.
• In operating systems, paging is a storage mechanism used to
retrieve processes from secondary storage into main
memory in the form of pages.
• The main idea behind paging is to divide each process into
pages. The main memory is likewise divided into frames.
• One page of the process is stored in one of the frames of
memory. The pages can be stored at different locations of the
memory, but the priority is always to find contiguous frames or
holes.
• Pages of the process are brought into main memory only when
they are required; otherwise, they reside in secondary storage.
• Different operating systems define different frame sizes. All
frames must be of equal size. Since pages are mapped to frames
in paging, the page size must be the same as the frame size.
• Example
• Let us consider a main memory of size 16 KB and a frame
size of 1 KB; the main memory will therefore be divided
into a collection of 16 frames of 1 KB each.
• There are 4 processes in the system, P1, P2, P3 and
P4, of 4 KB each. Each process is divided into pages of 1
KB each so that one page can be stored in one frame.
• Initially, all the frames are empty, so the pages of the
processes are stored in a contiguous way.
• The frames, the pages and the mapping between the two are
shown in the accompanying figure.
• Let us consider that P2 and P4 are moved to the waiting
state after some time. Now 8 frames become empty,
and therefore other pages can be loaded into that empty
space. The process P5 of size 8 KB (8 pages) is waiting
inside the ready queue.
• We have 8 non-contiguous frames available in the
memory, and paging provides the flexibility of storing the
process at different places. Therefore, we can load the
pages of process P5 in place of P2 and P4, as sketched below.
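• A minimal Python sketch of this allocation (hypothetical structures and
frame numbers; the actual mapping depends on the figure above). It simply
hands one free frame to each page of P5, showing that the frames need not
be contiguous.

# Sketch of page-to-frame allocation for the example above
# (frame size = 1 KB, 16 frames in total).
free_frames = [2, 3, 6, 7, 10, 11, 14, 15]   # e.g. the frames freed by P2 and P4

def allocate(num_pages, free_frames):
    # Assign one free frame to each page; the frames need not be contiguous.
    if num_pages > len(free_frames):
        raise MemoryError("not enough free frames")
    return {page: free_frames.pop(0) for page in range(num_pages)}

p5_page_table = allocate(8, free_frames)     # P5 has 8 pages of 1 KB each
print(p5_page_table)   # {0: 2, 1: 3, 2: 6, 3: 7, 4: 10, 5: 11, 6: 14, 7: 15}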
• Memory Management Unit
• The purpose of the Memory Management Unit (MMU) is to
convert the logical address into the physical address.
The logical address is the address generated by the CPU
for every page, while the physical address is the actual
address of the frame where each page will be stored.
• Page Table in OS
• Page Table is a data structure used by the virtual
memory system to store the mapping between logical
addresses and physical addresses.
• Logical addresses are generated by the CPU for the
pages of the processes therefore they are generally
used by the processes.
• Physical addresses are the actual frame address of the
memory. They are generally used by the hardware or
more specifically by RAM subsystems.
• Physical Address Space = M words
• Logical Address Space = L words
• Page Size = P words
• Physical Address = log2(M) = m bits
• Logical Address = log2(L) = l bits
• Page offset = log2(P) = p bits
• The CPU always accesses processes through their logical
addresses. However, the main memory recognizes physical
addresses only.
• In this situation, the Memory Management Unit comes
into the picture. It converts the page number of the logical address to
the frame number of the physical address. The offset remains the same
in both addresses.
• To perform this task, the Memory Management Unit needs a special kind
of mapping, which is done by the page table. The page table stores the
frame number corresponding to each page number of the
process.
• In other words, the page table maps the page number to its actual
location (frame number) in the memory.
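• A minimal Python sketch of this page-table lookup (hypothetical page-table
contents; 1 KB pages as in the earlier example). The page number indexes
the table, and the offset is carried over unchanged.

# Sketch of MMU-style address translation (illustrative only).
PAGE_SIZE = 1024                       # 1 KB pages, matching the earlier example
page_table = {0: 5, 1: 2, 2: 9, 3: 7}  # page number -> frame number (hypothetical)

def translate(logical_address):
    page = logical_address // PAGE_SIZE      # high-order bits: page number
    offset = logical_address % PAGE_SIZE     # low-order bits: offset (unchanged)
    frame = page_table[page]                 # page-table lookup done by the MMU
    return frame * PAGE_SIZE + offset        # physical address

print(translate(2 * PAGE_SIZE + 100))  # page 2, offset 100 -> frame 9, offset 100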
• Size of the page table
• However, the part of the process which is being
executed by the CPU must be present in the main
memory during that time period. The page table must
also be present in the main memory all the time
because it has the entry for all the pages.
• The size of the page table depends upon the number of
entries in the table and the bytes stored in one entry.
• Let's consider,
1. Logical Address = 24 bits
2. Logical Address space = 2 ^ 24 bytes
3. Let's say, Page size = 4 KB = 2 ^ 12 bytes
4. Page offset = 12 bits
5. Number of bits for the page number = Logical Address - Page Offset = 24
- 12 = 12 bits
6. Number of pages = 2 ^ 12 = 4096 = 4 K pages
7. Let's say, Page table entry = 1 Byte
8. Therefore, the size of the page table = 4 K entries X 1 Byte = 4 KB
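• The same calculation can be checked with a few lines of Python, using the
values assumed above.

# Worked version of the page-table-size calculation (assumed values as above).
logical_address_bits = 24                 # logical address = 24 bits
page_size = 4 * 1024                      # 4 KB = 2^12 bytes
offset_bits = page_size.bit_length() - 1  # 12 bits of offset
page_number_bits = logical_address_bits - offset_bits   # 24 - 12 = 12 bits
num_pages = 2 ** page_number_bits         # 4096 pages (4 K)
entry_size = 1                            # 1 byte per page-table entry
page_table_size = num_pages * entry_size  # 4096 bytes = 4 KB
print(page_number_bits, num_pages, page_table_size)     # 12 4096 4096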
What is Page Fault in Operating
System?
• A page fault behaves much like an error. A page fault
happens when a program tries to access a piece of
memory that is not present in physical memory (main
memory). The fault tells the operating system to locate
the data through virtual memory management and then
bring it from secondary memory, such as a hard disk,
into primary memory.
Page Fault Terminology

• 1. Page Hit
• When the CPU attempts to obtain a needed page from main
memory and the page exists in main memory (RAM), it is
referred to as a "PAGE HIT".
• 2. Page Miss
• If the needed page is not present in the main memory
(RAM), it is known as a "PAGE MISS".
• 3. Page Fault Time
• The time it takes to fetch the required page from secondary
memory into main memory and resume the access is known
as the "PAGE FAULT TIME".
• 4. Page Fault Rate
• The rate at which threads encounter page faults in memory
is referred to as the "PAGE FAULT RATE". The page
fault rate is measured per second.
• 5. Hard Page Fault
• If a required page exists in the hard disk's page file, it is
referred to as a "HARD PAGE FAULT".
• 6. Soft Page Fault
• If a required page is not located on the hard disk but is
found somewhere else in memory, it is referred to as
a "SOFT PAGE FAULT".
• 7. Minor Page Fault
• If a process needs data that already exists in memory
but is currently allotted to another process, it is
referred to as a "MINOR PAGE FAULT".
Demand Paging
• Every process in the virtual memory contains lots of pages
and in some cases, it might not be efficient to swap all the
pages for the process at once.
• Because it might be possible that the program may need
only a certain page for the application to run.
• Let us take an example: suppose there is a 500 MB
application, and it may need as little as 100 MB of pages to be
swapped in; in this case, there is no need to swap in all
pages at once.
• The demand paging system is somewhat similar to a
paging system with swapping, where processes mainly
reside in secondary memory (usually the hard disk).
• Thus demand paging is the process that solves the
above problem by swapping in pages only on demand.
• This is also known as a lazy swapper (it never swaps a
page into memory unless it is needed).
• A swapper that deals with the individual pages of a
process is referred to as a pager.
• Demand Paging is a technique in which a page is
usually brought into the main memory only when it is
needed or demanded by the CPU.
• Initially, only those pages are loaded that are required
by the process immediately.
• Those pages that are never accessed are thus never
loaded into the physical memory.
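• A rough sketch of the idea in Python (hypothetical data structures, not a
real OS implementation), assuming a per-page valid bit: a page is brought
into a frame only when it is first accessed, and accessing a non-resident
page triggers a simulated page fault.

# Sketch of demand paging: pages are loaded from the backing store on demand.
page_table = {p: {"frame": None, "valid": False} for p in range(8)}
free_frames = list(range(4))                                   # assume free frames exist
backing_store = {p: f"contents of page {p}" for p in range(8)} # pages on disk

def access(page):
    entry = page_table[page]
    if not entry["valid"]:                       # page fault: page not in memory
        print(f"page fault on page {page}, loading from secondary storage")
        frame = free_frames.pop(0)               # replacement is handled later
        entry["frame"], entry["valid"] = frame, True
        _ = backing_store[page]                  # simulate the disk read
    return entry["frame"]

access(3)   # first access -> page fault, page 3 brought in
access(3)   # second access -> already resident, no fault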
Demand Paging
• (Figure) The demand-paging diagram shows the CPU generating logical
addresses, the page table, the operating system handling the interrupt
(page fault), and pages being moved between secondary memory and main
memory (the logical and physical address spaces).
Advantages of Demand Paging

• The benefits of using the Demand Paging technique are as follows:


• With the help of Demand Paging, memory is utilized efficiently.
• Demand paging avoids External Fragmentation.
• Less Input/Output is needed for Demand Paging.
• This process is not constrained by the size of physical memory.
• With Demand Paging it becomes easier to share the pages.
• With this technique, portions of the process that are never
called are never loaded.
Disadvantages of Demand paging

• There is an increase in overheads due to interrupts and
page tables.
• Memory access time in demand paging is longer.
Page Replacement Algorithm
• Page Fault: A page fault happens when a running
program accesses a memory page that is mapped into
the virtual address space but not loaded in physical
memory.
• Since actual physical memory is much smaller than
virtual memory, page faults happen. In case of a page
fault, Operating System might have to replace one of
the existing pages with the newly needed page.
• Different page replacement algorithms suggest different
ways to decide which page to replace. The target for all
algorithms is to reduce the number of page faults.
First In First Out (FIFO)
• This algorithm is similar to the operation of a queue.
All the pages are stored in the queue in the order in which
they are allocated frames in main memory. The page that
was allocated memory first stays at the front of the queue
and is replaced first; that is, the page at the front of the
queue is removed at the time of replacement.
• Example: Consider the pages referenced by the CPU in the
order 6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1.
• Let there be 3 frames in memory.
• 6, 7, 8 are allocated to the vacant slots as they are not in
memory (Page Faults).
• When 9 comes, a page fault occurs; it replaces 6, which is
the oldest page in memory, i.e., the front element of the queue.
• Then 6 comes (Page Fault); it replaces 7, which is the
oldest page in memory now.
• Similarly, 7 replaces 8 and 1 replaces 9.
• Then 6 comes, which is already in memory (Page Hit).
• Then 7 comes (Page Hit).
• Then 8 replaces 6, 9 replaces 7. Then 1 comes (Page Hit).
Number of Page Faults = 9
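• The FIFO behaviour traced above can be reproduced with a short simulation
sketch (illustrative Python, not an OS implementation).

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement (simulation sketch)."""
    frames = set()
    queue = deque()          # arrival order of the pages currently in memory
    faults = 0
    for page in reference_string:
        if page not in frames:            # page fault
            faults += 1
            if len(frames) == num_frames: # memory full: evict the oldest page
                victim = queue.popleft()
                frames.remove(victim)
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_page_faults([6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1], 3))  # -> 9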
• While using the First In First Out algorithm, the number
of page faults can sometimes increase when the number of
frames is increased. This phenomenon is called Belady's Anomaly.

• Example 1: Consider the page reference string 1, 3, 0, 3, 5,
6, 3 with 3 page frames. Find the number of page faults.
Least Recently Used(LRU)
• This algorithm works on past reference data. The page
that was used the earliest, i.e., the page whose most recent
use appears earliest in the sequence, is replaced.
• Example: Consider the pages referenced by the CPU in
the order 6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6, with 3 frames.
• First, all the frames are empty. 6, 7, 8 are allocated to the frames
(Page Faults).
• Now, 9 comes and replaces 6, which was used the earliest (Page Fault).
• Then, 6 replaces 7, 7 replaces 8, and 1 replaces 9 (Page Faults).
• Then 6 comes, which is already present (Page Hit).
• Then 7 comes (Page Hit).
• Then 8 replaces 1, 9 replaces 6, 1 replaces 7, and 7 replaces 8 (Page
Faults).
• Then 9 comes (Page Hit).
• Then 6 replaces 1 (Page Fault).
The number of Page Faults = 12
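• A corresponding LRU simulation sketch (illustrative Python), which keeps the
resident pages ordered from least to most recently used.

def lru_page_faults(reference_string, num_frames):
    """Count page faults under LRU replacement (simulation sketch)."""
    frames = []              # ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)           # page hit: refresh its recency
        else:
            faults += 1                   # page fault
            if len(frames) == num_frames:
                frames.pop(0)             # evict the least recently used page
        frames.append(page)               # page is now the most recently used
    return faults

print(lru_page_faults([6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6], 3))  # -> 12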
• Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4,
2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page
faults.
Optimal Page Replacement
• In this algorithm, the page which would be used after
the longest interval is replaced. In other words, the
page which is farthest to come in the upcoming
sequence is replaced.
• Example: Consider the pages referenced by the CPU in
the order 6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6, with 3 frames.
• First, all the frames are empty. 6, 7, 8 are allocated to the frames
(Page Faults).
• Now, 9 comes and replaces 8, as 8 is the farthest away in the upcoming
sequence; 6 and 7 would come earlier than that, so they are not replaced
(Page Fault).
• Then 6 comes, which is already present (Page Hit).
• Then 7 comes (Page Hit).
• Then 1 similarly replaces 9 (Page Fault).
• Then 6 comes (Page Hit), 7 comes (Page Hit).
• Then 8 replaces 6 (Page Fault) and 9 replaces 8 (Page Fault).
• Then 1, 7, 9 come respectively, and all are already present in
memory (Page Hits).
• Then 6 replaces 9 (Page Fault); it could equally replace 7 or 1, as none
of these pages appears again in the upcoming sequence.
The number of Page Faults = 8
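• A corresponding sketch of the optimal algorithm (illustrative Python). Note
that it looks ahead in the reference string, which a real OS cannot do; the
algorithm is mainly used as a benchmark.

def optimal_page_faults(reference_string, num_frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                      # page hit
        faults += 1                       # page fault
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # Evict the resident page whose next use is farthest away (or never).
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6], 3))  # -> 8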
• Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3,
0, 3, 2, 3 with 4 page frames. Find the number of page faults.
Segmentation
• Segmentation is another way of dividing the addressable
memory.
• It is another scheme of memory management and it generally
supports the user view of memory.
• The Logical address space is basically the collection of segments.
Each segment has a name and a length.
• Basically, a process is divided into segments. Like paging,
segmentation divides or segments the memory.
• The difference is that while paging divides the
memory into fixed-size pages, segmentation
divides the memory into variable-size segments, which are
then loaded into the logical memory space.
• A program is basically a collection of segments. A
segment is a logical unit such as:
• main program
• procedure
• function
• method
• object
• local variables and global variables
• symbol table
• common block
• stack
• arrays
Types of Segmentation

• Virtual Memory Segmentation: Each process is
divided into a number of segments, but the segmentation
is not done all at once. This segmentation may or may
not take place at the run time of the program.

• Simple Segmentation: Each process is divided into a
number of segments, all of which are loaded into memory
at run time, though not necessarily contiguously.
Characteristics of Segmentation

• The Segmentation partitioning scheme is variable-size.
• Partitions of the secondary memory are commonly
known as segments.
• Partition size mainly depends upon the length of
modules.
• Thus with the help of this technique, secondary memory
and main memory are divided into unequal-sized
partitions.
Need of Segmentation

• One of the important drawbacks of memory
management in the operating system is the separation
of the user's view of memory from the actual physical
memory, and paging is a technique that provides the
separation of these two memories.
• The user's view is basically mapped onto physical storage,
and this mapping allows differentiation between
physical and logical memory.
• It is possible that the operating system divides the
same function into different pages, and those pages may
or may not be loaded into memory at the same time;
moreover, the operating system does not care about the
user's view of the process.
• Due to this, the system's efficiency decreases.
• Segmentation is a better technique because it divides
the process into segments.
User's View of a Program
• A computer system that is using segmentation has a
logical address space that can be viewed as multiple
segments. The size of a segment is variable, that is,
it may grow or shrink.
• As mentioned earlier, during execution each segment has a
name and a length. An address specifies both the
name of the segment and the displacement within the segment.
• Therefore the user specifies each address with the help
of two quantities: a segment name and an offset.
• For simpler implementation, segments are numbered and
referred to by a segment number rather than a
segment name.
• Thus the logical address consists of a two-tuple:
• <segment-number, offset>
• where,
• Segment Number (s): the segment number represents the
number of bits that are required to represent the segment.
• Offset (d): the segment offset represents the number of
bits that are required to represent the size of the segment.
Segment Table

• A table that is used to store the information of all
segments of the process is commonly known as the
Segment Table. Generally, there is no simple
relationship between logical addresses and physical
addresses in this scheme.
• The mapping of a two-dimensional logical address into
a one-dimensional physical address is done using the
segment table.
• This table is mainly stored as a separate segment in
main memory.
• The register that stores the base address of the segment
table is commonly known as the Segment Table Base Register (STBR).
• In the segment table each entry has :
1.Segment Base/base address: The segment base mainly
contains the starting physical address where the segments
reside in the memory.
2.Segment Limit: The segment limit is mainly used to specify
the length of the segment.
• Segment Table Base Register (STBR): The STBR register is
used to point to the segment table's location in memory.
• Segment Table Length Register (STLR): This register
indicates the number of segments used by a program. The
segment number s is legal if s < STLR.
Segmentation Hardware
• The logical address generated by the CPU consists of two
parts:
• Segment Number (s): It is used as an index into the
segment table.
• Offset (d): It must lie between 0 and the segment
limit. If the offset exceeds the segment limit, a trap is
generated.
• Thus, for a legal offset: segment base + offset = address in
physical memory.
• The segment table is basically an array of base-limit
register pairs, as sketched below.
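• A minimal Python sketch of this check, using hypothetical (base, limit)
values for the segment-table entries.

# Sketch of segmentation-hardware address translation (illustrative values).
segment_table = [
    (1400, 1000),   # segment 0: base = 1400, limit = 1000
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(segment_number, offset):
    base, limit = segment_table[segment_number]
    if offset < 0 or offset >= limit:
        raise Exception("trap: offset exceeds segment limit")  # addressing error
    return base + offset            # physical address = segment base + offset

print(translate(2, 53))    # -> 4353
# translate(0, 1222)       # would trap: offset beyond the segment's limit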
Advantages of Segmentation

• In the Segmentation technique, the segment table is
mainly used to keep a record of the segments. The
segment table also occupies less space than the paging
table.
• There is no Internal Fragmentation.
• Segmentation generally allows us to divide the program
into modules that provide better visualization.
• Segments are of variable size.
Disadvantages of Segmentation

• This technique is expensive.
• The time taken to fetch an instruction increases, since
two memory accesses are now required.
• Segments are of unequal size in segmentation and thus
are not suitable for swapping.
• This technique leads to external fragmentation: as
processes are loaded into and removed from main memory,
the free space gets broken down into smaller pieces,
which results in a lot of memory waste.
Example of Segmentation

• There are five segments numbered from 0 to 4.


• These segments will be stored in Physical memory as
shown.
• There is a separate entry for each segment in the
segment table, which contains the starting address
of the segment in physical memory (denoted as the base)
and also contains the length of the segment (denoted as
the limit).
Difference between Paging and Segmentation
• Paging divides a process into fixed-size pages; segmentation divides it
into variable-size segments.
• Paging does not follow the user's view of memory; segmentation
supports the user's view of memory.
• Paging can suffer from internal fragmentation; segmentation has no
internal fragmentation but leads to external fragmentation.
• Paging uses a page table (page number to frame number); segmentation
uses a segment table (segment number to base and limit).
Fragmentation
• The process of dividing a computer file, such as a data
file or an executable program file, into fragments that
are stored in different parts of a computer’s storage
medium, such as its hard disc or RAM, is known as
fragmentation in computing.
• When a file is fragmented, it is stored on the storage
medium in non-contiguous blocks, which means that the
blocks are not stored next to each other.
