
Unit 4

Memory Management

Syllabus:

Memory Management
1. Swapping
2. Contiguous Memory Allocation
3. Paging
4. Structure of the Page Table
5. Segmentation

Virtual Memory Management
1. Virtual Memory
2. Demand Paging
3. Page-Replacement Algorithms
4. Thrashing
Logical Versus Physical Address Space
In operating systems, logical and physical addresses are used to manage and access memory.

Logical address:
➔ An address generated by the CPU during program execution is commonly referred to as a logical address.
➔ A logical address is also known as a virtual address.
➔ The set of all logical addresses generated by a program is its logical address space.
➔ The process accesses memory using logical addresses, which are translated into physical addresses before main memory is accessed.


Physical address:
➔ A physical address is the actual address in main memory where data is stored.
➔ It is a location in physical memory, as opposed to a virtual address.
➔ The set of all physical addresses corresponding to these logical addresses is a physical
address space
➔ The run-time mapping from virtual to physical addresses is done by a hardware
device called the Memory-Management Unit (MMU).
➔ The base register is now called a relocation register.
➔ The value in the relocation register is added to every address generated by a user
process at the time the address is sent to memory.
➔ For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346.
➔ The MMU uses a page table to translate logical addresses into physical addresses.
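
This relocation is just an addition, plus the bounds check discussed under Memory Protection below. A minimal Python sketch of the idea; the limit value here is an assumption added for illustration, not given in the text:

    RELOCATION = 14000   # value in the relocation (base) register, from the example above
    LIMIT = 12000        # assumed limit-register value (not specified in the text)

    def mmu_translate(logical):
        if logical >= LIMIT:                 # protection check (see Memory Protection)
            raise MemoryError("trap: address beyond limit register")
        return logical + RELOCATION          # dynamic relocation

    print(mmu_translate(0))     # 14000
    print(mmu_translate(346))   # 14346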
Address Binding
1. Compile-Time Binding — if the process's memory location is known at compile time, absolute code can be generated.

2. Load-Time Binding — if the location is not known at compile time, the compiler generates relocatable code and final binding is delayed until load time.

3. Execution-Time Binding — if the process can be moved during execution, binding is delayed until run time; this scheme requires hardware support such as the MMU.


SWAPPING
➔ To increase CPU utilization in multiprogramming, a memory management scheme known
as swapping can be used.
➔ Standard swapping involves moving processes between main memory and a backing store (hard disk).
➔ Swapping is a memory management scheme in which any process can be temporarily
swapped from main memory to secondary memory so that the main memory can be
made available for other processes.
➔ The purpose of swapping in an operating system is to move data from the hard disk into RAM so that application programs can use it.
➔ It's important to remember that swapping is used only when the required data is not already in RAM.
Swapping is subdivided into two operations: swap-in and swap-out.
Swap-in transfers a program from the hard disk into main memory (RAM).
Swap-out moves a process from RAM to the hard disk.
➔ System maintains a ready queue of ready-to-run processes which have memory images on
disk.
➔ Whenever the CPU scheduler decides to execute a process, it calls the dispatcher.
➔ The dispatcher checks to see whether the next process in the queue is in memory.
➔ If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process.
➔ The CPU scheduler determines which processes are swapped in and which are swapped
out.
◆ Consider a multiprogramming environment that employs a priority-based scheduling
algorithm. When a high-priority process enters the Ready queue, a low-priority
process is swapped out so the high-priority process can be loaded and executed.
◆ When a high-priority process terminates, the low priority process is swapped back
into memory to continue its execution.
➔ The context-switch time in such a swapping system is fairly high.
➔ To get an idea of the context-switch time, let's assume that the user process is 100 MB in size and the hard disk has a transfer rate of 50 MB per second.

➔ The actual transfer of the 100 MB process to or from main memory takes

Time = Process Size / Transfer Rate
     = 100 MB / 50 MB per second
     = 2 seconds = 2000 milliseconds


➔ The swap time is 2000 milliseconds. Since we must swap both out and in, the total swap
time is about 4,000 milliseconds.
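
As a quick check of this arithmetic, a throwaway Python sketch:

    def swap_time_ms(process_mb, transfer_mb_per_s):
        # Time = Process Size / Transfer Rate, converted to milliseconds
        return process_mb / transfer_mb_per_s * 1000

    one_way = swap_time_ms(100, 50)
    print(one_way)        # 2000.0 ms for one transfer
    print(2 * one_way)    # 4000.0 ms total: swap out + swap in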
Memory Allocation
Contiguous Memory Allocation
● Contiguous Memory Allocation is a type of memory allocation technique where
processes are allotted a continuous block of space in memory.
● This block can be of fixed size for all the processes in a fixed size partition scheme or
can be of variable size depending on the requirements of the process in a variable size
partition scheme.
● The main memory must accommodate both the operating system and the various user processes. We therefore need to allocate main memory in the most efficient way possible.

● Main memory is usually divided into two partitions:


○ one for the operating system
○ one for the user processes.

We can place the operating system in either low memory or high memory.
Memory Protection
★ When the CPU scheduler selects a process for execution, the dispatcher loads the
relocation and limit registers with the correct values.
★ Because every address generated by the CPU is checked against these registers, we can
protect both the operating system and the other users' programs and data.
★ To protect the operating system from user processes, we can use a relocation register together with a limit register.
★ The relocation register holds the value of the smallest physical address, whereas the limit register holds the range of logical addresses.
★ Every logical address generated by the process must be less than the value in the limit register.
★ The memory-management unit (MMU) maps the logical address dynamically by adding the value in the relocation register; the mapped address is then sent to memory.
● When the scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Because every address generated by the CPU is checked against these two registers, the operating system, other programs, and user data are protected from being altered by the running process.
Fixed (Static) Partition:

● One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions. Each partition may contain exactly one process.
● Thus, the degree of multiprogramming is bound by the number of partitions.
● In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition.
● When the process terminates, the partition becomes available for another process.
● Fixed partitioning suffers from a problem called internal fragmentation.
Internal Fragmentation
● Internal fragmentation happens when the memory is split into fixed-sized blocks.
● Whenever a process requests memory, one of the fixed-sized blocks is allotted to the process.
● When the memory allocated to the process is somewhat larger than the memory requested, a free space is created within the given memory block.
● Because this free space inside the block goes unused, it causes internal fragmentation.
● The difference between the allotted and the requested memory is called internal fragmentation (unused space).
Variable (Dynamic) Partition:
● Variable partitioning is also a contiguous allocation technique. It is used to alleviate the problems faced by fixed partitioning.
● In contrast with fixed partitioning, partitions are not made before execution.
● Various features associated with variable partitioning:
○ Initially RAM is empty, and partitions are made during run time according to each process's needs rather than at system configuration time.
○ The size of a partition is equal to the size of the incoming process.
○ Because the partition size varies with the needs of the process, internal fragmentation is avoided, ensuring efficient utilization of RAM.
○ The number of partitions in RAM is not fixed; it depends on the number of incoming processes and the size of main memory.
External Fragmentation
● External fragmentation happens when there is a sufficient amount of free space in memory to satisfy a process's memory request, but the request cannot be fulfilled because the free memory is non-contiguous.

Example:
Processes P1 (2 MB) and P3 (1 MB) have completed their execution, leaving two holes of 2 MB and 1 MB. Suppose process P5 of size 3 MB now arrives. The empty space in memory cannot be allocated to it because the free memory is non-contiguous.

P5 of size 3 MB cannot be accommodated.


First Fit, Best Fit, and Worst Fit Memory Allocation strategies
First Fit:
● In the First Fit memory allocation strategy, the operating system searches for the first
available memory block that is large enough to accommodate a process.
● It starts searching from the beginning of the memory and allocates the first block that
meets the size requirement.
● It may result in significant fragmentation, both internal and external.

Best Fit:
● In the Best Fit memory allocation strategy, the operating system searches the entire
memory space and allocates the smallest available block that is large enough to
accommodate the process.
● It aims to minimize internal fragmentation.
Worst Fit:
● In the Worst Fit memory allocation strategy, the operating system searches the entire memory space and allocates the largest available block to the process.

● The idea is that the large leftover hole is more likely to remain usable for another process than the small hole left behind by Best Fit.

● However, it can lead to more internal fragmentation, as smaller processes may be allocated in larger blocks, leaving unused memory within the allocated blocks.
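
The three strategies differ only in which hole they pick from the free list. A minimal Python sketch, with a hypothetical list of hole sizes:

    def first_fit(holes, size):
        # first hole from the start that is large enough
        return next((i for i, h in enumerate(holes) if h >= size), None)

    def best_fit(holes, size):
        # smallest hole that is still large enough
        fits = [i for i, h in enumerate(holes) if h >= size]
        return min(fits, key=lambda i: holes[i]) if fits else None

    def worst_fit(holes, size):
        # largest hole available
        fits = [i for i, h in enumerate(holes) if h >= size]
        return max(fits, key=lambda i: holes[i]) if fits else None

    holes = [100, 500, 200, 300, 600]     # free block sizes in KB (hypothetical)
    print(first_fit(holes, 212))          # 1 -> the 500 KB block
    print(best_fit(holes, 212))           # 3 -> the 300 KB block
    print(worst_fit(holes, 212))          # 4 -> the 600 KB block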
Paging
● Paging is a memory-management scheme that permits the physical address space of a process
to be noncontiguous.
● Paging is a memory management scheme that eliminates the need for contiguous allocation of
physical memory.
● The process of retrieving processes in the form of pages from secondary storage into main memory is known as paging.

● The basic method for paging involves


○ Breaking physical memory into fixed-sized blocks called - - - > Frames
○ Breaking logical memory into blocks of the same size called - - - > Pages
○ When a process is to be executed, its pages are loaded into any available memory frames from
their source (a file system or the backing store).
● The hardware support for paging is illustrated in the figure.
● Every address generated by the CPU (Logical Address) is divided into two parts

○ (1) a page number (p)

○ (2) a page offset (d)

● The page number is used as an index into a page table.

● The page table contains the base address of each page in physical memory.

● This base address is combined with the page offset to define the physical memory

address that is sent to the memory unit.


[Figure: paging example for a 32-byte memory with 4-byte pages — the four pages of logical memory (pages 0–3) map into four of the eight frames of physical memory.]
Finding Physical Address for the corresponding Logical address
Formula:
Physical Address = (Frame number in which the page resides × Page size) + Offset

➔ Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0
is in frame 5.
➔ Thus, logical address 0 maps to physical address 20 [= (5 × 4) + 0].
➔ Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
➔ Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to
frame 6.
➔ Thus, logical address 4 maps to physical address 24 [= (6 × 4) + 0].
➔ Logical address 13 is page 3, offset 1; page 3 is mapped to frame 2, so it maps to physical address 9 [= (2 × 4) + 1].
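
These lookups can be reproduced with a few lines of Python. The frames for pages 0, 1, and 3 follow from the text; the frame for page 2 (frame 1) is taken from the textbook figure and is an assumption here:

    PAGE_SIZE = 4
    page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page -> frame (page 2's frame assumed)

    def translate(logical):
        page, offset = divmod(logical, PAGE_SIZE)
        return page_table[page] * PAGE_SIZE + offset

    print(translate(0))    # 20 = (5 * 4) + 0
    print(translate(3))    # 23 = (5 * 4) + 3
    print(translate(4))    # 24 = (6 * 4) + 0
    print(translate(13))   # 9  = (2 * 4) + 1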
Translation Look-aside Buffer (TLB)
Or
Paging Hardware with TLB

● The page table can be stored either in:
○ Registers, or
○ Main memory (RAM)
● The page table can be implemented as a set of dedicated registers. These registers should be built with very high-speed logic to make the paging-address translation efficient.
● The use of registers for the page table is satisfactory if the page table is reasonably small (for example, 256 entries).
● Most contemporary computers, however, allow the page table to be very large (for example, 1 million entries). For such tables, the use of fast registers to implement the page table is not feasible.
Main Memory

● The page table is kept in main memory.


● A page-table base register (PTBR) points to the page table.
● A page-table length register (PTLR) indicates the size of the page table.
● In this scheme every data/instruction access requires two memory accesses.
○ One for the page table
○ One for the data/instruction
● The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called Associative Memory or Translation Look-aside Buffer (TLB).
● The TLB is used with page tables in the following way. The TLB contains only a few of the page-table entries (the most frequently accessed pages).

● When a logical address is generated by the CPU, its page number is presented to the TLB.
● If the page number is found(known as a TLB Hit), its frame number is immediately
available and is used to access memory.
● If the page number is not in the TLB (known as a TLB miss), a memory reference to the
page table must be made.

● When the frame number is obtained, we can use it to access memory (Figure 8.14).
● In addition, we add the page number and frame number to the TLB, so that they will be found
quickly on the next reference.

● If the TLB is already full of entries, an existing entry must be selected for replacement.
● Replacement policies range from least recently used (LRU) through round-robin to random.
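
A minimal sketch of this lookup sequence in Python, using an LRU-style TLB; the capacity and page size here are arbitrary choices for the example:

    from collections import OrderedDict

    PAGE_SIZE = 4096          # assumed page size
    TLB_CAPACITY = 16         # assumed TLB size

    tlb = OrderedDict()       # page number -> frame number, in LRU order

    def translate(addr, page_table):
        page, offset = divmod(addr, PAGE_SIZE)
        if page in tlb:                    # TLB hit: frame available immediately
            tlb.move_to_end(page)
            frame = tlb[page]
        else:                              # TLB miss: extra memory reference
            frame = page_table[page]       # walk the page table in main memory
            if len(tlb) >= TLB_CAPACITY:
                tlb.popitem(last=False)    # replace the least recently used entry
            tlb[page] = frame              # cache it for the next reference
        return frame * PAGE_SIZE + offset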
Memory Protection in a Paged Environment
● Memory protection in a paged environment is accomplished by protection bits associated with each frame.
● Normally, these bits are kept in the page table.
● Every reference to memory goes through the page table to find the correct frame
number.
● At the same time that the physical address is being computed, the protection bits can be
checked to verify that no writes are being made to a read-only page.
● An attempt to write to a read-only page causes a hardware trap to the operating system
(or memory-protection violation).

● One additional bit is generally attached to each entry in the page table: a valid–invalid bit.
● When this bit is set to valid, the associated page is in the process’s logical address space
and is thus a legal (or valid) page.
● When the bit is set to invalid, the page is not in the process’s logical address space.
● Illegal addresses are trapped by use of the valid–invalid bit.
● The operating system sets this bit for each page to allow or disallow access to the page.
● Suppose, for example, that in a system with a 14-bit address space (0 to 16383), we have
a program that should use only addresses 0 to 10468.
● Given a page size of 2 KB, we have the situation shown in Figure 8.15.
● Addresses in pages 0, 1, 2, 3, 4, and 5 are mapped normally through the page table.
● Any attempt to generate an address in pages 6 or 7, however, will find that the
valid–invalid bit is set to invalid, and the computer will trap to the operating system
(invalid page reference).
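
The valid–invalid check for this 14-bit example can be sketched as follows (2 KB pages, eight pages in total):

    PAGE_SIZE = 2048                       # 2 KB pages
    NUM_PAGES = 8                          # 14-bit address space / 2 KB
    PROGRAM_LIMIT = 10468                  # highest legal address in the example

    last_valid_page = PROGRAM_LIMIT // PAGE_SIZE           # = 5
    valid = [p <= last_valid_page for p in range(NUM_PAGES)]

    def access(addr):
        page = addr // PAGE_SIZE
        if not valid[page]:
            raise MemoryError("trap: invalid page reference")   # pages 6 and 7
        return page

    print(access(10468))   # 5: page 5 is legal
    try:
        access(12288)      # page 6
    except MemoryError as e:
        print(e)           # trap: invalid page reference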
Structure of the Page Table
Or
Types of Page Tables

Some of the common techniques that are used for structuring the Page table are as follows:

1. Hierarchical Paging
2. Hashed Page Tables
3. Inverted Page Tables
Hierarchical Paging
● Hierarchical paging or Multilevel paging is a type of paging where the logical

address space is broken up into multiple page tables.


● The entries of the level 1 page table are pointers to a level 2 page table and entries of the

level 2 page tables are pointers to a level 3 page table and so on.

● The entries of the last level page table store actual frame information.

● The advantage of using hierarchical paging is that it allows the operating system to

efficiently manage large logical address spaces.


● By dividing the page table into multiple levels, the operating system can minimize the

amount of memory required to store the page table.


Why it is required?
● Most modern computer systems support a large logical address space (2^32 to 2^64).
In such an environment, the page table itself becomes excessively large.
● For example, consider a system with a 32-bit logical address space.
● If the page size in such a system is 4 KB (2^12), then a page table may consist of up to
1 million entries (2^32 / 2^12 = 2^20).
● Assuming that each entry consists of 4 bytes, each process may need up to 4 MB of
physical address space for the page table alone.

● Clearly, we would not want to allocate the page table contiguously in main memory.
● One simple solution to this problem is to divide the page table into smaller pieces. We can accomplish this division in several ways.
● One way is to use a two-level paging algorithm, in which the page table itself is also paged (Figure 8.17).
● For example, consider again the system with a 32-bit logical address space (logical memory = 2^32) and a page size of 4 KB (2^12).

● A logical address is divided into


○ a page number consisting of 20 bits (i.e 2^20 Pages)
○ a page offset consisting of 12 bits.
● Because we page the page table, the page number is further divided into a 10-bit page number and a 10-bit page offset. Thus, a logical address is as follows:

| p1 (10 bits) | p2 (10 bits) | d (12 bits) |

● where p1 is an index into the outer page table and p2 is the displacement within the page of the inner page table.

● The address-translation method for this architecture is shown in Figure

● Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.
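
The bit manipulation for this 10/10/12 split is mechanical. A sketch; the table contents below are purely illustrative:

    PAGE_SIZE = 4096   # 2^12

    def split(addr):
        p1 = addr >> 22             # top 10 bits: index into the outer page table
        p2 = (addr >> 12) & 0x3FF   # middle 10 bits: index within the inner page table
        d = addr & 0xFFF            # low 12 bits: page offset
        return p1, p2, d

    def translate(addr, outer_table):
        p1, p2, d = split(addr)
        inner_table = outer_table[p1]   # outer entry points to one page of the page table
        frame = inner_table[p2]         # inner entry holds the frame number
        return frame * PAGE_SIZE + d

    outer = {0: {1: 9}}                 # tiny hypothetical outer/inner tables
    addr = (0 << 22) | (1 << 12) | 5    # p1=0, p2=1, d=5
    print(translate(addr, outer))       # 9 * 4096 + 5 = 36869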
Hashed Page Tables
★ A common approach for handling address spaces larger than 32 bits is to use a hashed

page table, with the hash value being the virtual page number.
★ Each entry in the hash table contains a linked list of elements that hash to the same

location (to handle collisions).

★ Each element consists of three fields:

○ (1) The virtual page number

○ (2) The value of the mapped page frame

○ (3) A pointer to the next element in the linked list.


★ The algorithm works as follows:

○ The virtual page number in the virtual address is hashed into the hash table.
○ The virtual page number is compared with field 1 in the first element in the

linked list.

○ If there is a match, the corresponding page frame (field 2) is used to form the

desired physical address.

○ If there is no match, subsequent entries in the linked list are searched for a

matching virtual page number. This scheme is shown in Figure 8.19.
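
A sketch of this structure in Python, with lists standing in for the linked lists; the table size is an arbitrary assumption:

    TABLE_SIZE = 64    # number of hash buckets (assumed)
    hash_table = [[] for _ in range(TABLE_SIZE)]   # each bucket: list of (vpn, frame) pairs

    def insert(vpn, frame):
        hash_table[vpn % TABLE_SIZE].append((vpn, frame))

    def lookup(vpn):
        for entry_vpn, frame in hash_table[vpn % TABLE_SIZE]:   # walk the chain
            if entry_vpn == vpn:       # field 1 matches
                return frame           # field 2: the mapped page frame
        raise KeyError("not resident: page fault")

    insert(0x12345, 7)
    print(lookup(0x12345))   # 7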


Inverted Page Tables
★ An inverted page table is a data structure used in operating systems for efficient memory
management in virtual memory systems.

★ Unlike a traditional page table, which maps virtual addresses to physical addresses on a
per-process basis, an inverted page table maps physical addresses to virtual addresses
across all processes.
★ In a typical virtual memory system, each process has its own page table that maps
virtual addresses to physical addresses.
★ This approach requires a separate page table for each process, which can consume a
significant amount of memory for systems with a large number of processes.
★ In contrast, an inverted page table maintains a single table that contains entries for each
physical frame in the system. Each entry in the inverted page table stores information
about the process and virtual page that is currently mapped to that physical frame.
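
A sketch of the idea: one entry per physical frame, searched by (process id, virtual page number); the frame count is an arbitrary assumption:

    NUM_FRAMES = 8
    ipt = [None] * NUM_FRAMES            # entry i describes physical frame i

    def map_page(pid, vpn, frame):
        ipt[frame] = (pid, vpn)          # record which process/page owns this frame

    def lookup(pid, vpn):
        for frame, entry in enumerate(ipt):   # linear search over all frames
            if entry == (pid, vpn):
                return frame
        raise KeyError("page fault: not resident")

    map_page(pid=1, vpn=42, frame=3)
    print(lookup(1, 42))   # 3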
Segmentation
★ Like Paging, Segmentation is another non-contiguous memory allocation technique.

★ In segmentation, the process is not divided blindly into fixed-size pages; segmentation is a variable-size partitioning scheme.

★ In segmentation, secondary memory and main memory are divided into partitions of

unequal size.

★ The size of the partitions depends on the length of the modules.

★ The partitions of secondary memory are called as segments.

★ The process is divided into modules for better visualization.


Programmer’s view of a program
Segment table:
★ A segment table stores information about each segment of the process.
★ It has two columns:
○ The first column stores the limit — the size or length of the segment.
○ The second column stores the base — the base (starting) address of the segment in main memory.
★ The segment table itself is stored as a separate segment in main memory.
★ The segment-table base register (STBR) stores the base address of the segment table.
In accordance with the above segment table, the segments are stored in main memory as shown in the figure.
Translating Logical Address into Physical Address

Step 1:
● The CPU always generates a logical address, consisting of two parts:
○ Segment Number
○ Segment Offset
● Segment Number specifies the specific segment of the process from which CPU
wants to read the data.
● Segment Offset specifies the specific word in the segment that CPU wants to read.
Step 2:
● For the generated segment number, corresponding entry is located in the segment table.
● Then, the segment offset is compared with the limit (size) of the segment.
● If the segment offset is greater than or equal to the limit, a trap is generated.
● If the segment offset is smaller than the limit, the request is treated as a valid request.
● Each entry in the segment table has a segment base and a segment limit.
○ The segment base contains the starting physical address where the segment resides
in memory,
○ The segment limit specifies the length of the segment.
Eg:
➔ We have five segments numbered from 0 through 4. The segments are stored in physical memory as shown in the figure.
➔ The segment table has a separate entry for each segment, giving the beginning address of
the segment in physical memory (or base) and the length of that segment (or limit).
➔ For example, segment 2 is 400 bytes long and begins at location 4300.
➔ Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
➔ A reference to segment 3, byte 852, is mapped to 3200 (the base of segment 3) + 852 =
4052. A reference to byte 1222 of segment 0 would result in a trap to the operating system,
as this segment is only 1,000 bytes long.
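
Reproducing this example in Python. The base of segment 0 (1400) and the limit of segment 3 (1100) are taken from the textbook figure and are assumptions here; segments 1 and 4 are omitted since the text does not describe them:

    # segment -> (base, limit)
    segment_table = {0: (1400, 1000), 2: (4300, 400), 3: (3200, 1100)}

    def translate(segment, offset):
        base, limit = segment_table[segment]
        if offset >= limit:
            raise MemoryError("trap: offset beyond segment limit")
        return base + offset

    print(translate(2, 53))    # 4300 + 53 = 4353
    print(translate(3, 852))   # 3200 + 852 = 4052
    try:
        translate(0, 1222)     # segment 0 is only 1000 bytes long
    except MemoryError as e:
        print(e)               # trap: offset beyond segment limit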
Virtual Memory
● The term virtual memory refers to something which appears to be present but actually
it is not.
● The virtual memory technique allows users to use more memory for a program than
the real memory of a computer.
● So, virtual memory is the concept that gives the illusion to the user that they will
have main memory equal to the capacity of secondary storage media.
Concept of Virtual Memory
● A programmer can write a program which requires more memory space than the
capacity of the main memory. Such a program is executed by virtual memory

technique.

● The program is stored in the secondary memory. The memory management unit

(MMU) transfers the currently needed part of the program from the secondary

memory to the main memory for execution.

● This to and from movement of instructions and data (parts of a program) between the

main memory and the secondary memory is called Swapping.


Implementation of Virtual Memory in Operating System
● Virtual memory enables efficient memory management, provides the flexibility to run programs
larger than the available physical memory.

Real-time examples that demonstrate the use of virtual memory:

Running Multiple Applications:


● Suppose you have several applications open on your computer, such as a web browser, a
word processor, and a media player. Each application requires memory to store its code and
data.
● Virtual memory allows these applications to coexist in memory, even if the physical
memory is limited.
● The operating system allocates a portion of virtual memory to each application, swapping
data between physical memory and disk as needed.
● Virtual memory can be implemented via:

■ Demand paging

■ Demand segmentation
Demand Paging
● Consider how an executable program might be loaded from disk into memory.
● One option is to load the entire program in physical memory at program execution time.
● However, a problem with this approach is that we may not initially need the entire
program in memory.
● An alternative strategy is to load pages only as they are needed. This technique is known
as demand paging and is commonly used in virtual memory systems.
● With demand-paged virtual memory, pages are loaded only when they are demanded
during program execution.
● A demand-paging system is similar to a paging system with swapping (Figure 9.4) where
processes reside in secondary memory (usually a disk).
● When we want to execute a process, we swap it into memory. Rather than swapping the
entire process into memory, though, we use a lazy swapper.
● A lazy swapper never swaps a page into memory unless that page will be needed
● When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory.
● Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

● Hardware support is required to distinguish between


○ Those pages that are in memory
○ Those pages that are on the disk
● The valid–invalid bit scheme provides this distinction. Marking a page invalid has no effect as long as the process never attempts to access that page.
● While the process executes and accesses pages that are memory resident, execution
proceeds normally.
Valid-Invalid Bit
● With each page table entry a valid–invalid bit is associated
(v → in-memory – memory resident, i → not-in-memory)
● Initially valid–invalid bit is set to i on all entries
● Example of a page table snapshot:

● During MMU address translation, if valid–invalid bit in page table entry is i → page fault
Page Table When Some Pages Are Not in Main Memory
When a process references a page that is not currently in memory, a page fault occurs, triggering the
operating system to load the required page from secondary storage into an available page frame in main
memory.
The procedure for handling this page fault is straightforward (see the figure):
1. We check an internal table (usually kept with the process control block) for this process to
determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but we have not yet
brought in that page, we now page it in.

3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the
page as though it had always been in memory.
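
The six steps can be condensed into a runnable sketch; all the data structures here are simplified stand-ins for the real kernel tables:

    def handle_page_fault(page, valid_pages, page_table, free_frames, backing_store):
        if page not in valid_pages:           # steps 1-2: invalid reference
            raise RuntimeError("terminate process: invalid memory access")
        frame = free_frames.pop()             # step 3: take a frame from the free list
        contents = backing_store[page]        # step 4: disk read into the frame (simulated)
        page_table[page] = (frame, "v")       # step 5: mark the page as now in memory
        return frame                          # step 6: caller restarts the instruction

    page_table, free_frames = {}, [0, 1, 2]
    handle_page_fault(7, {7}, page_table, free_frames, {7: "page 7 data"})
    print(page_table)    # {7: (2, 'v')}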
Page Replacement
Algorithms
★ Page replacement algorithms are used in demand paging systems to determine which page should be removed from main memory when a page fault occurs and no free page frames are available.
★ The goal of these algorithms is to minimize the number of page faults and optimize system
performance.
FIFO Page Replacement Algorithm (suffers from Belady's anomaly)
★ As the name suggests, this algorithm works on the principle of “First in First

out“.
★ It replaces the oldest page that has been present in the main memory for the

longest time.

★ It is implemented by keeping track of all the pages in a queue.


[Q] A system uses 3 page frames for storing process pages in main memory. It uses the First in First out
(FIFO) page replacement policy. Assume that all the page frames are initially empty. What is the total
number of page faults that will occur while processing the page reference string given below-

4 , 7, 6, 1, 7, 6, 1, 2, 7, 2

Total number of page faults occurred = 6


[Q] A system uses 3 page frames for storing process pages in main memory. It uses the First in First out
(FIFO) page replacement policy. Assume that all the page frames are initially empty. What is the total
number of page faults that will occur while processing the page reference string given below-

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Total number of page faults occurred = 15
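
A short FIFO simulator that reproduces both answers (6 and 15):

    from collections import deque

    def fifo_faults(refs, n_frames):
        frames, faults = deque(), 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == n_frames:
                    frames.popleft()        # evict the oldest resident page
                frames.append(page)
        return faults

    print(fifo_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))   # 6
    print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))   # 15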


Optimal Page Replacement

★ This algorithm replaces the page that will not be referenced by the CPU for the longest time in the future.
★ It is practically impossible to implement this algorithm.
★ This is because the pages that will not be used for the longest time in the future cannot be predicted.
★ However, it is the best algorithm known and gives the least number of page faults.
★ Hence, it is used as a performance benchmark for other algorithms.
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the Optimal page
replacement policy. Assume that all the page frames are initially empty. What is the total number of
page faults that will occur while processing the page reference string given below--

4 , 7, 6, 1, 7, 6, 1, 2, 7, 2

Total number of page faults occurred = 5


[Q] A system uses 3 page frames for storing process pages in main memory. It uses the Optimal page
replacement policy. Assume that all the page frames are initially empty. What is the total number of
page faults that will occur while processing the page reference string given below--

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Total number of page faults occurred = 9
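
The same simulator idea for Optimal replacement, evicting the resident page whose next use lies farthest in the future; it reproduces the answers 5 and 9:

    def optimal_faults(refs, n_frames):
        frames, faults = [], 0
        for i, page in enumerate(refs):
            if page in frames:
                continue
            faults += 1
            if len(frames) < n_frames:
                frames.append(page)
                continue
            def next_use(p):                 # index of p's next reference, inf if none
                try:
                    return refs.index(p, i + 1)
                except ValueError:
                    return float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
        return faults

    print(optimal_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))   # 5
    print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))   # 9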


LRU(Least Recently Used ) Page
Replacement
★ As the name suggests, this algorithm works on the principle of “Least

Recently Used“.

★ It replaces the page that has not been referenced by the CPU for the longest time.
[Q] A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently
Used (LRU) page replacement policy. Assume that all the page frames are initially empty. What is the
total number of page faults that will occur while processing the page reference string given below-

4 , 7, 6, 1, 7, 6, 1, 2, 7, 2

Total number of page faults occurred = 6


[Q] A system uses 3 page frames for storing process pages in main memory. It uses the LRU
replacement policy. Assume that all the page frames are initially empty. What is the total number of
page faults that will occur while processing the page reference string given below--

7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1

Total number of page faults occurred = 12
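
And an LRU simulator, which reproduces the answers 6 and 12:

    def lru_faults(refs, n_frames):
        frames, faults = [], 0          # most recently used page kept at the end
        for page in refs:
            if page in frames:
                frames.remove(page)     # hit: refresh its recency
            else:
                faults += 1
                if len(frames) == n_frames:
                    frames.pop(0)       # evict the least recently used page
            frames.append(page)
        return faults

    print(lru_faults([4, 7, 6, 1, 7, 6, 1, 2, 7, 2], 3))   # 6
    print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))   # 12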


Demand Segmentation

The operating system also uses demand segmentation, which is similar to demand paging. The operating system uses demand segmentation when there is insufficient hardware available to implement demand paging.

The segment table has a valid bit to specify if the segment is already in physical memory or
not. If a segment is not in physical memory then segment fault results, which traps to the
operating system and brings the needed segment into physical memory, much like a page
fault.

Demand segmentation allows pages that are often referenced together to be brought into memory together, which decreases the number of page faults.

Another space saver would be to keep some of a segment's page tables on disk and swap them into memory when needed.
Thrashing
● Thrashing: the processor spends most of its time swapping pages in and out (servicing page faults) rather than executing user instructions. (or)

● Because of thrashing, the CPU utilization is going to be reduced or negligible.


Locality Model:
➔ A locality is a set of pages that are actively used together.
➔ The locality model states that as a process executes, it moves from one locality to another.
➔ A program is generally composed of several different localities which may overlap.
➔ For example, when a function is called, it defines a new locality where memory references are made to the instructions of the function call, its local variables, global variables, etc. Similarly, when the function is exited, the process leaves this locality.

Techniques to handle:
1. Working Set Model

2. Page Fault Frequency


● The most important property of the working set, then, is its size. If we compute the

working-set size, WSSi, for each process in the system, we can then consider that

D = Σ WSSi, where D is the total demand for frames

● Each process is actively using the pages in its working set. Thus, process i needs WSSi

frames.

● Now, if ‘m’ is the number of frames available in the memory, there are 2 possibilities:
○ If the total demand is greater than the total number of available frames (D > m), thrashing will occur, because some processes will not have enough frames.

○ If D ≤ m, there will be no thrashing.


1. Working Set Model
● The working-set model is based on the assumption of locality.

● This model uses a parameter, Δ, to define the working-set window.

● The idea is to examine the most recent Δ page references.

● The set of pages in the most recent Δ page references is the working set (Figure 9.20).

● If a page is in active use, it will be in the working set.


● If it is no longer being used, it will drop from the working set Δ time units after its last reference.

● Thus, the working set is an approximation of the program’s locality.


● For example, given the sequence of memory references shown in Figure 9.20, if Δ = 10 memory references, then the working set at time t1 is {1, 2, 5, 6, 7}. By time t2, the working set has changed to {3, 4}.
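
The window computation itself is a one-liner. Below, the ten references are assumed to match the opening of the Figure 9.20 sequence in the textbook; they reproduce the working set given for time t1:

    def working_set(refs, t, delta):
        # pages referenced in the window of the last `delta` references ending at index t
        return set(refs[max(0, t - delta + 1): t + 1])

    refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1]    # assumed first ten references of Figure 9.20
    print(working_set(refs, t=9, delta=10))  # {1, 2, 5, 6, 7} -- the working set at t1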
2. Page Fault Frequency :
A more direct approach to handling thrashing is the one that uses the Page-Fault
Frequency concept.
● This model is straightforward. We act based on the page-fault frequency (rate), allocating frames to each process accordingly. We set an upper bound (UB) and a lower bound (LB) for the page-fault rate (R), and for each process we compare its page-fault rate R with these bounds.
● If R > UB , then we can conclude that a process needs more frames to control this rate. This
means we need to allocate more frames to it to avoid thrashing. If no frames are available
then this process can be suspended until a sufficient number of frames are available. Once
this process is suspended, the allocated frames should be allocated to some other
high-paging process.
● If R < LB, we have more than a sufficient amount of frames for a process and some of it
can be allocated to some other processes.
● Using the R, UB, and LB, we can maintain a balance between the frame demands and frame
allocation.
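
A sketch of the control rule, with the bounds chosen arbitrarily for illustration:

    UPPER_BOUND = 0.15   # UB: assumed page-fault-rate ceiling
    LOWER_BOUND = 0.02   # LB: assumed page-fault-rate floor

    def adjust_frames(rate, allocated):
        # return the new frame allocation for a process with page-fault rate R
        if rate > UPPER_BOUND:           # R > UB: give the process another frame
            return allocated + 1
        if rate < LOWER_BOUND:           # R < LB: reclaim a frame for other processes
            return max(1, allocated - 1)
        return allocated                 # within bounds: leave the allocation alone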
