OS Unit 4
Memory Management
Syllabus:
Memory Management, Virtual Memory Management
Logical Versus Physical Address Space
In operating systems, logical and physical addresses are used to manage and access memory.
Logical address:
➔ An address generated by the CPU during program execution is commonly referred to as a logical address.
➔ The set of all logical addresses generated by a program is a logical address space.
➔ The process accesses memory using logical addresses, which are translated by the memory management unit (MMU) into physical addresses.
Physical address:
➔ An address actually seen by the memory unit (the address loaded into the memory-address register) is known as a physical address.
➔ The set of all physical addresses corresponding to the logical addresses is a physical address space.
We can place the operating system in either low memory or high memory.
Memory Protection
★ When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit registers with the correct values.
★ Because every address generated by the CPU is checked against these registers, we can
protect both the operating system and the other users' programs and data.
★ In memory protection, we have to protect the operating system from user processes, which can be done by using a relocation register together with a limit register.
★ The relocation register has the value of the smallest physical address whereas the limit
register has the range of the logical addresses.
★ These two registers impose a condition: each logical address generated must be less than the value in the limit register.
★ The memory management unit (MMU) maps the logical address dynamically by adding the value in the relocation register; the translated (mapped) address is then sent to memory.
● In the above diagram, when the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Because every address generated by the CPU is checked against these two registers, the operating system, other programs, and the users' data are protected from being altered by the running process.
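A minimal Python sketch (not part of the notes) of the hardware check described above; the relocation/limit values and addresses are only illustrative:

def translate(logical_addr, relocation, limit):
    """Map a CPU-generated logical address to a physical address."""
    # Every logical address must be less than the value in the limit register;
    # otherwise the hardware traps to the operating system.
    if logical_addr >= limit:
        raise MemoryError("trap: addressing error (logical address >= limit)")
    # Dynamic relocation: add the relocation (base) register value.
    return relocation + logical_addr

# Example: relocation register = 14000, limit register = 3000
print(translate(346, relocation=14000, limit=3000))   # -> 14346
# translate(3500, 14000, 3000) would raise, i.e., trap to the OS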
Fixed (Static) Partition:
● Sometimes enough total memory is free to satisfy the request of a process, yet the request cannot be fulfilled because the available memory is non-contiguous.
Example :
Process P1 (2 MB) and process P3 (1 MB) have completed their execution, leaving two holes of 2 MB and 1 MB. Suppose a process P5 of size 3 MB now arrives. Although 3 MB of memory is free in total, it cannot be allocated to P5 because the free memory is non-contiguous.
Best Fit:
● In the Best Fit memory allocation strategy, the operating system searches the entire
memory space and allocates the smallest available block that is large enough to
accommodate the process.
● It aims to minimize internal fragmentation.
Worst Fit:
● In the Worst Fit memory allocation strategy, the operating system searches the entire
memory space and allocates the largest available block to the process.
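A short Python sketch (function names are my own) showing how the two strategies pick a hole from a list of free block sizes:

def best_fit(holes, request):
    # Smallest hole that is still large enough for the request.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, request):
    # Largest available hole.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # free block sizes in KB
print(best_fit(holes, 212))         # -> 3 (the 300 KB hole)
print(worst_fit(holes, 212))        # -> 4 (the 600 KB hole)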
Paging
● The page table contains the base address of each page in physical memory.
● This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
Figure: Paging example for a 32-byte memory with 4-byte pages (each page of the process is mapped by the page table to a frame in physical memory).
Finding Physical Address for the corresponding Logical address
Formula:
Physical Address = (Frame number in which the page resides × Page size) + Page offset
➔ Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0
is in frame 5.
➔ Thus, logical address 0 maps to physical address 20 [= (5 × 4) + 0].
➔ Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
➔ Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to
frame 6.
➔ Thus, logical address 4 maps to physical address 24 [= (6 × 4) + 0].
➔ Logical address 13 maps to physical address 9.
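The calculation above can be written as a small Python sketch; the page table below follows the worked example (the frame for page 2 is assumed from the standard figure):

PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}    # page number -> frame number

def to_physical(logical):
    page = logical // PAGE_SIZE           # page number
    offset = logical % PAGE_SIZE          # page offset
    frame = page_table[page]              # frame holding that page
    return frame * PAGE_SIZE + offset     # physical address

for la in (0, 3, 4, 13):
    print(la, "->", to_physical(la))      # 0 -> 20, 3 -> 23, 4 -> 24, 13 -> 9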
Translation Look-aside Buffer (TLB)
Or
Paging Hardware with TLB
● The page table can be stored (implemented) in either:
○ Registers, or
○ Main memory (RAM).
● The page table is implemented as a set of dedicated registers. These registers should be
built with very high-speed logic to make the paging-address translation efficient.
● The use of registers for the page table is satisfactory if the page table is reasonably
small (for example, 256 entries).
● Most contemporary computers, however, allow the page table to be very large (for example, 1 million entries). For such machines, the use of fast registers to implement the page table is not feasible; instead, the page table is kept in main memory.
● Because keeping the page table in main memory requires an extra memory access for every reference (one for the page-table entry and one for the data), a small, fast hardware cache called the translation look-aside buffer (TLB) is used.
● When a logical address is generated by the CPU, its page number is presented to the TLB.
● If the page number is found(known as a TLB Hit), its frame number is immediately
available and is used to access memory.
● If the page number is not in the TLB (known as a TLB miss), a memory reference to the
page table must be made.
● When the frame number is obtained, we can use it to access memory (Figure 8.14).
● In addition, we add the page number and frame number to the TLB, so that they will be found
quickly on the next reference.
● If the TLB is already full of entries, an existing entry must be selected for replacement.
● Replacement policies range from least recently used (LRU) through round-robin to random.
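A rough Python model (data structures and sizes are my own) of the TLB hit/miss path described above:

from collections import OrderedDict

PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 9}           # full page table kept in main memory
tlb = OrderedDict()                        # small, fast cache: page -> frame
TLB_SIZE = 2

def access(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    if page in tlb:                        # TLB hit: frame number available immediately
        frame = tlb[page]
    else:                                  # TLB miss: extra reference to the page table
        frame = page_table[page]
        if len(tlb) >= TLB_SIZE:
            tlb.popitem(last=False)        # TLB full: evict the oldest entry (FIFO-style)
        tlb[page] = frame                  # cache the mapping for the next reference
    return frame * PAGE_SIZE + offset

print(access(5))       # miss the first time (page 0)
print(access(5))       # hit the second time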
Memory Protection in a Paged Environment
● Memory protection in a paged environment is accomplished by
protection bits associated with each frame.
● Normally, these bits are kept in the page table.
● Every reference to memory goes through the page table to find the correct frame
number.
● At the same time that the physical address is being computed, the protection bits can be
checked to verify that no writes are being made to a read-only page.
● An attempt to write to a read-only page causes a hardware trap to the operating system
(or memory-protection violation).
● One additional bit is generally attached to each entry in the page table: a valid–invalid bit.
● When this bit is set to valid, the associated page is in the process’s logical address space
and is thus a legal (or valid) page.
● When the bit is set to invalid, the page is not in the process’s logical address space.
● Illegal addresses are trapped by use of the valid–invalid bit.
● The operating system sets this bit for each page to allow or disallow access to the page.
● Suppose, for example, that in a system with a 14-bit address space (0 to 16383), we have
a program that should use only addresses 0 to 10468.
● Given a page size of 2 KB, we have the situation shown in Figure 8.15.
● Addresses in pages 0, 1, 2, 3, 4, and 5 are mapped normally through the page table.
● Any attempt to generate an address in pages 6 or 7, however, will find that the
valid–invalid bit is set to invalid, and the computer will trap to the operating system
(invalid page reference).
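● To see where the valid–invalid boundary falls (a sketch of the arithmetic, not from the notes): with 2 KB pages (2048 bytes), an address a lies in page ⌊a / 2048⌋. The highest legal address, 10468, lies in page ⌊10468 / 2048⌋ = 5, so pages 0–5 are marked valid, while pages 6 and 7 (addresses 12288–16383) are marked invalid and trap on access.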
Structure of the Page Table
Or
Types of Page Tables
Some of the common techniques that are used for structuring the Page table are as follows:
1. Hierarchical Paging
2. Hashed Page Tables
3. Inverted Page Tables
Hierarchical Paging
● Hierarchical paging or multilevel paging is a type of paging where the logical address space is broken into multiple levels of page tables: the entries of the level 1 (outer) page table point to level 2 page tables, the entries of the
level 2 page tables are pointers to a level 3 page table, and so on.
● The entries of the last level page table store actual frame information.
● The advantage of using hierarchical paging is that it allows the operating system to allocate the page table in smaller, non-contiguous pieces.
● Clearly, we would not want to allocate the page table contiguously in main memory.
● One simple solution to this problem is to divide the page table into smaller pieces. We can accomplish this division in several ways.
● One way is to use a two-level paging algorithm, in which the page table itself is also
paged (Figure 8.17 Previous Slide).
● For example, consider again a system with a 32-bit logical address space (logical memory = 2^32 bytes) and a page size of 4 KB (2^12 bytes). A logical address is then divided into a 20-bit page number and a 12-bit page offset; because the page table is itself paged, the page number is further divided into a 10-bit outer page number (p1) and a 10-bit inner index (p2).
● Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.
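A small Python sketch (field widths assumed as 10 + 10 + 12 bits) of how such a 32-bit address would be split for forward-mapped translation:

def split_address(logical):
    offset = logical & 0xFFF            # low 12 bits: page offset
    p2 = (logical >> 12) & 0x3FF        # next 10 bits: index into the inner page table
    p1 = (logical >> 22) & 0x3FF        # top 10 bits: index into the outer page table
    return p1, p2, offset

print(split_address(0x12345678))        # -> (72, 837, 1656)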
Hashed Page Tables
★ A common approach for handling address spaces larger than 32 bits is to use a hashed
page table, with the hash value being the virtual page number.
★ Each entry in the hash table contains a linked list of elements that hash to the same location. Each element consists of three fields: (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element in the list. The algorithm works as follows:
○ The virtual page number in the virtual address is hashed into the hash table.
○ The virtual page number is compared with field 1 in the first element in the
linked list.
○ If there is a match, the corresponding page frame (field 2) is used to form the desired physical address.
○ If there is no match, subsequent entries in the linked list are searched for a matching virtual page number.
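A plain-Python sketch of the lookup steps above (the bucket layout and names are my own); each bucket holds (virtual page number, frame) pairs that hash to the same slot:

NUM_BUCKETS = 8
hash_table = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    hash_table[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    bucket = hash_table[vpn % NUM_BUCKETS]   # 1) hash the virtual page number
    for entry_vpn, frame in bucket:          # 2) walk the chain, comparing field 1
        if entry_vpn == vpn:
            return frame                     # 3) match: field 2 gives the page frame
    raise KeyError("no mapping for this virtual page")

insert(3, 10)
insert(11, 4)                                # 3 and 11 collide: both hash to bucket 3
print(lookup(11))                            # -> 4, found after searching the chain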
Inverted Page Tables
★ Unlike a traditional page table, which maps virtual addresses to physical addresses on a per-process basis, an inverted page table maps physical addresses to virtual addresses across all processes.
★ In a typical virtual memory system, each process has its own page table that maps
virtual addresses to physical addresses.
★ This approach requires a separate page table for each process, which can consume a
significant amount of memory for systems with a large number of processes.
★ In contrast, an inverted page table maintains a single table that contains entries for each
physical frame in the system. Each entry in the inverted page table stores information
about the process and virtual page that is currently mapped to that physical frame.
Segmentation
★ Like Paging, Segmentation is another non-contiguous memory allocation technique.
★ In segmentation, secondary memory and main memory are divided into partitions of
unequal size.
Step 1:
● The CPU always generates a logical address, consisting of two parts-
○ Segment Number
○ Segment Offset
● Segment Number specifies the segment of the process from which the CPU wants to read data.
● Segment Offset specifies the specific word within that segment that the CPU wants to read.
Step 2:
● For the generated segment number, the corresponding entry is located in the segment table.
● Then, segment offset is compared with the limit (size) of the segment.
● If the segment offset is found to be greater than or equal to the limit, a trap to the operating system is generated (invalid address).
● If the segment offset is found to be smaller than the limit, the request is treated as a valid request.
● Each entry in the segment table has a segment base and a segment limit.
○ The segment base contains the starting physical address where the segment resides
in memory,
○ The segment limit specifies the length of the segment.
Eg:
➔ We have five segments numbered from 0 through 4. The segments are stored in physical
memory as shown(Next Slide).
➔ The segment table has a separate entry for each segment, giving the beginning address of
the segment in physical memory (or base) and the length of that segment (or limit).
➔ For example, segment 2 is 400 bytes long and begins at location 4300.
➔ Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
➔ A reference to segment 3, byte 852, is mapped to 3200 (the base of segment 3) + 852 =
4052. A reference to byte 1222 of segment 0 would result in a trap to the operating system,
as this segment is only 1,000 bytes long.
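The example can be checked with a short Python sketch; the base of segment 0 and the limit of segment 3 are assumed from the standard figure, the rest comes from the text above:

segment_table = {
    0: (1400, 1000),    # segment number: (base, limit)
    2: (4300, 400),
    3: (3200, 1100),
}

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # offset must be smaller than the segment limit
        raise MemoryError("trap: addressing error beyond end of segment")
    return base + offset

print(seg_translate(2, 53))      # -> 4353
print(seg_translate(3, 852))     # -> 4052
# seg_translate(0, 1222) traps: segment 0 is only 1,000 bytes long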
Virtual Memory
● The term virtual memory refers to something which appears to be present but actually is not.
● The virtual memory technique allows users to use more memory for a program than
the real memory of a computer.
● So, virtual memory is the concept that gives the illusion to the user that they will
have main memory equal to the capacity of secondary storage media.
Concept of Virtual Memory
● A programmer can write a program that requires more memory space than the capacity of the main memory. Such a program is executed using the virtual memory technique.
● The program is stored in secondary memory. The memory management unit (MMU) transfers the currently needed part of the program from secondary memory to main memory.
● This to-and-fro movement of instructions and data (parts of a program) between main memory and secondary memory is handled using the following techniques:
■ Demand paging
■ Demand segmentation
Demand Paging
● Consider how an executable program might be loaded from disk into memory.
● One option is to load the entire program in physical memory at program execution time.
● However, a problem with this approach is that we may not initially need the entire
program in memory.
● An alternative strategy is to load pages only as they are needed. This technique is known
as demand paging and is commonly used in virtual memory systems.
● With demand-paged virtual memory, pages are loaded only when they are demanded
during program execution.
● A demand-paging system is similar to a paging system with swapping (Figure 9.4) where
processes reside in secondary memory (usually a disk).
● When we want to execute a process, we swap it into memory. Rather than swapping the
entire process into memory, though, we use a lazy swapper.
● A lazy swapper never swaps a page into memory unless that page will be needed.
● When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory.
● Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
● During MMU address translation, if the valid–invalid bit in the page table entry is i (invalid), a page fault occurs.
Page Table When Some Pages Are Not in Main Memory
When a process references a page that is not currently in memory, a page fault occurs, triggering the
operating system to load the required page from secondary storage into an available page frame in main
memory.
The procedure for handling this page fault is straightforward (Figure -Previous Slide):
1. We check an internal table (usually kept with the process control block) for this process to
determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but we have not yet
brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the
page as though it had always been in memory.
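The six steps can be condensed into a Python sketch (all data structures and the disk-read stub are my own):

page_table = {}               # page -> frame, only for pages that are in memory
valid_pages = {0, 1, 2, 3}    # pages that belong to the process's address space
free_frames = [8, 9]          # the free-frame list

def read_page_from_disk(page, frame):
    pass                      # stands in for the scheduled disk read (step 4)

def handle_page_fault(page):
    if page not in valid_pages:          # steps 1-2: invalid reference -> terminate
        raise RuntimeError("terminate process: invalid memory access")
    frame = free_frames.pop()            # step 3: take a frame from the free-frame list
    read_page_from_disk(page, frame)     # step 4: read the desired page into that frame
    page_table[page] = frame             # step 5: mark the page as now in memory
    return frame                         # step 6: the faulting instruction is restarted

print(handle_page_fault(2))              # -> 9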
Page Replacement Algorithms
★ Page replacement algorithms are used in demand-paging systems to determine which page should be removed from main memory when a page fault occurs and there are no free page frames available.
★ The goal of these algorithms is to minimize the number of page faults and optimize system
performance.
FIFO Page Replacement Algorithm (Belady's anomaly)
★ As the name suggests, this algorithm works on the principle of “First in First
out“.
★ It replaces the oldest page that has been present in the main memory for the
longest time.
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the First in First out (FIFO) page replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while processing the page reference string given below-
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the First in First out
(FIFO) page replacement policy. Assume that all the page frames are initially empty. What is the total
number of page faults that will occur while processing the page reference string given below-
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
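A Python sketch (function name is my own) that counts FIFO page faults for questions like the ones above:

from collections import deque

def fifo_faults(reference_string, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:                # page fault
            faults += 1
            if len(frames) == num_frames:     # no free frame: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))                   # -> 15 page faults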
Optimal Page Replacement Algorithm
★ This algorithm replaces the page that will not be referred to by the CPU in the future for the longest time.
★ It is practically impossible to implement this algorithm.
★ This is because the pages that will not be used in future for the longest time can
not be predicted.
★ However, it is the best known algorithm and gives the least number of page
faults.
★ Hence, it is used as a performance measure criterion for other algorithms.
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the Optimal page
replacement policy. Assume that all the page frames are initially empty. What is the total number of
page faults that will occur while processing the page reference string given below--
4 , 7, 6, 1, 7, 6, 1, 2, 7, 2
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the Optimal page
replacement policy. Assume that all the page frames are initially empty. What is the total number of
page faults that will occur while processing the page reference string given below--
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
LRU (Least Recently Used) Page Replacement Algorithm
★ As the name suggests, this algorithm works on the principle of “Least Recently Used“.
★ It replaces the page that has not been referred to by the CPU for the longest time.
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently
Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the
total number of page faults that will occur while processing the page reference string given below-
4 , 7, 6, 1, 7, 6, 1, 2, 7, 2
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the LRU
replacement policy. Assume that all the page frames are initially empty. What is the total number of
page faults that will occur while processing the page reference string given below--
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
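The same kind of sketch for LRU (function name is my own), using an ordered dictionary to track recency:

from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    frames, faults = OrderedDict(), 0          # keys kept in recency order
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)           # referenced: now the most recently used
        else:
            faults += 1                        # page fault
            if len(frames) == num_frames:
                frames.popitem(last=False)     # evict the least recently used page
            frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))                     # -> 12 page faults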
Demand Segmentation
The operating system also uses demand segmentation, which is similar to demand paging. Demand segmentation is used when there is insufficient hardware available to implement demand paging.
The segment table has a valid bit to specify if the segment is already in physical memory or
not. If a segment is not in physical memory, then a segment fault results, which traps to the
operating system and brings the needed segment into physical memory, much like a page
fault.
Demand segmentation allows pages that are often referenced together to be brought into memory together, which decreases the number of page faults.
Another space saver would be to keep some of a segment’s page tables on disk and swap them into memory when needed.
Thrashing
● Thrashing: the processor spends most of its time servicing page faults (swapping pages in and out) rather than executing user instructions. In other words, a process is thrashing if it spends more time paging than executing.
Techniques to handle:
1. Working Set Model
● If we compute the working-set size, WSSi, for each process in the system, we can then consider that D = Σ WSSi, where D is the total demand for frames.
● Each process is actively using the pages in its working set. Thus, process i needs WSSi
frames.
● Now, if ‘m’ is the number of frames available in memory, there are 2 possibilities:
○ If the total demand is greater than the total number of available frames (D > m), thrashing will occur, because some processes will not have enough frames; the operating system then selects a process to suspend.
○ If D ≤ m, every process can be allocated enough frames and thrashing is avoided.
● The set of pages in the most recent Δ page references is the working set (Figure 9.20). If a page is in active use, it will be in the working set; if it is no longer being used, it will drop from the working set Δ time units after its last reference.
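A tiny Python sketch (helper name and reference string are my own) of computing a working set over the most recent Δ references:

def working_set(reference_string, delta, t):
    """Pages referenced in the window of the last `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(reference_string[start:t + 1])

refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1]
print(working_set(refs, delta=4, t=9))    # -> {1, 5, 7}

# If the sum of the working-set sizes of all processes (D = sum of WSS_i) exceeds
# the number of available frames m, the OS suspends a process to prevent thrashing.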