Computer Memory Management in Operating Systems

Memory Management

What is Main Memory?

• Main memory is central to the operation of a modern computer. It is a large array of words or bytes, ranging in size from hundreds of thousands to billions. Main memory is a repository of rapidly available information shared by the CPU and I/O devices. It is the place where programs and data are kept while the processor is actively using them. Main memory is closely coupled to the processor, so moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile: RAM loses its data when a power interruption occurs.
What is Memory Management?

• In a multiprogramming computer, the operating system resides in one part of memory, and the rest is used by multiple processes. The task of subdividing memory among the different processes is called memory management. Memory management is the method by which the operating system manages operations between main memory and disk during process execution. Its main aim is to achieve efficient utilization of memory.
Logical and Physical Address Space

• Logical Address Space: An address generated by the CPU is known as a "logical address". It is also known as a virtual address. The logical address space can be defined as the size of the process. A logical address can be changed.

• Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a "physical address". A physical address is also known as a real address. The set of all physical addresses corresponding to the logical addresses is known as the physical address space. The run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU), which computes each physical address. The physical address always remains constant.
Why Memory Management is Required?

• To allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by processes.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity during process execution.
Memory management
• We have seen how the CPU can be shared by a set of processes
– Improves system performance
– Process management
• Need to keep several processes in memory
– Share memory
• Learn various techniques to manage memory
– Hardware dependent
Memory management
What are we going to learn?
• Basic memory management: logical vs. physical address space, protection, contiguous memory allocation, paging, segmentation, segmentation with paging.

• Virtual memory: background, demand paging, performance, page replacement, page replacement algorithms (FIFO, LRU), allocation of frames, thrashing.
Basic Hardware
• Main memory and the registers built into the processor itself are the only storage that the CPU can access directly.
• The instructions being executed, and any data they use, must be in one of these direct-access storage devices.
• Registers built into the CPU are generally accessible within one cycle of the CPU clock.
• The same cannot be said of main memory, which is accessed via a transaction on the memory bus; a memory access may take many CPU cycles.
• The remedy is to add fast memory between the CPU and main memory, called a cache.
• For correct operation, we must protect the operating system from access by user processes, and also protect user processes from one another.
• The protection is provided by hardware.
• First of all we need to make sure that each process has a
separate memory space. To do this, we need the ability to
determine the range of legal addresses that the process may
access and to ensure that the process can access only these
legal addresses.
• We can provide this protection by using two registers
i) Base Register: Holds the smallest legal physical memory address.
ii) Limit Register: specifies the size of the range.
e.g., if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 to 420939 (inclusive).
A base and a limit register define a logical address space.
Hardware address protection with base and limit registers.
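To make this concrete, here is a minimal C sketch (not from the slides) of the comparison the hardware performs on every user-mode memory access, using the register values from the example above:

#include <stdio.h>

#define BASE  300040UL   /* base register: smallest legal physical address */
#define LIMIT 120900UL   /* limit register: size of the range */

/* The hardware compares every user-mode address against base and
   base + limit; any failure traps to the operating system. */
void check_address(unsigned long addr) {
    if (addr >= BASE && addr < BASE + LIMIT)
        printf("%lu: legal access\n", addr);
    else
        printf("%lu: trap to OS (addressing error)\n", addr);
}

int main(void) {
    check_address(300040UL);  /* legal: first address of the range */
    check_address(420939UL);  /* legal: last address of the range  */
    check_address(420940UL);  /* illegal: one past base + limit    */
    return 0;
}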
Address Binding

• Normally, a program resides on a disk as a binary executable file.
• To be executed, the program must be brought into memory and placed within a process.
• Depending on the memory management in use, the program may be moved between disk and memory during its execution.
• The processes on the disk that are waiting to be brought into memory for execution form the input queue.
• As a process executes, it accesses instructions and data from memory. Eventually, the process terminates, and its memory space is declared available.
• Binding of instructions and data to memory:
Address binding of instructions and data to memory addresses can happen at three different stages:
1. Compile time 2. Load time 3. Execution time
Multistep Processing of a User Program
Dynamic Loading

• To obtain better memory utilization, we can use dynamic loading.
• The main program is loaded into memory and executed.
• All routines are kept on disk in a relocatable load format; with dynamic loading, a routine is not loaded until it is called.
Advantages:
• Better memory-space utilization.
• CPU performance increases.
Dynamic Linking& Shared Libraries
• Some operating systems support static linking and some support dynamic linking.
• Dynamic linking is usually used with system libraries, such as language subroutine libraries.
• The system links the dependent library routines into the main executing program when they are needed.
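On POSIX systems, dynamic loading is exposed to programs through the dlopen/dlsym API; the sketch below loads a routine only when it is needed. The library name and symbol ("libm.so.6", "cos") are just illustrative choices:

#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic-loading API; link with -ldl */

int main(void) {
    /* The routine is not loaded until we explicitly ask for it. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the routine's address inside the loaded library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine) printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}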
Differences between Logical Address and Physical Address

Physical Address:
• The physical address is the actual address of the data inside the memory. The logical address is a virtual address, and the program needs physical memory for its execution. The user never deals with the physical address. The user program generates the logical address, which is mapped to the physical address by the Memory Management Unit (MMU).
• The set of all physical addresses corresponding to the logical addresses in the logical address space is called the physical address space.

Logical Address:
• It is a virtual address generated by the CPU while a program is running. It is referred to as a virtual address because it does not exist physically. Using this address, the CPU accesses the actual (physical) address inside the memory, and data is fetched from there.
• The hardware device called the Memory Management Unit (MMU) is used for mapping this logical address to the physical address. The set of all logical addresses generated by the CPU for a program is called the logical address space.
Comparison between Logical Address and Physical Address

• Logical address: a virtual address. Physical address: the actual location in memory.
• Logical address: visible to the user. Physical address: not visible to the user.
• Logical address: generated by the CPU. Physical address: computed by the MMU.
• Logical address: used by the user to reach the physical address inside memory. Physical address: not directly accessible by the user.
• The set of all logical addresses generated by the CPU is called the logical address space; the set of all physical addresses corresponding to them is called the physical address space.
Swapping
• Swapping in OS is done to get access to data present in
secondary memory and transfer it to the main memory so
that it can be used by the application programs.
• A process must be loaded into memory in order to execute.
• If there is not enough memory available to keep all running
processes in memory at the same time, then some processes
that are not currently using the CPU may have their memory
swapped out to a fast local disk called the backing store.
• Swapping is the process of moving a process from memory
to backing store and moving another process from backing
store to memory. Swapping is a very slow process compared
to other operations .
Swapping of two processes using a disk as a backing store
Swap In:
• The method of removing a process from secondary
memory (Hard Drive) and restoring it to the main
memory (RAM ) for execution is known as the Swap
In method.
Swap Out:
• It is the method of removing a process from main memory (RAM) and sending it to secondary memory (hard drive) so that processes with higher priority or greater memory demand can be executed. This is known as the Swap Out method.
• Note: Swap In and Swap Out are done by the Medium-Term Scheduler (MTS).
Advantages of Swapping in OS:
• Swapping in OS helps in achieving the goal of Maximum
CPU Utilization.
• Swapping ensures proper memory availability for every
process that needs to be executed.
• Swapping helps avoid process starvation: no process waits excessively long before it gets to execute.
• The CPU can keep multiple tasks progressing with the help of swapping, so processes do not have to wait long before execution.
• Swapping ensures proper RAM(main memory) utilization.
• Swapping creates a dedicated disk partition in the hard
drive for swapped processes which is called swap space.
Contiguous Memory Allocation

• The main memory must accommodate both the OS and the various user processes.
• The memory is divided into two partitions: one for the OS and one for the user processes.
• The operating system is allocated first, usually at either low or high memory locations, and then the remaining available memory is allocated to processes as needed. (The OS is usually loaded low, because that is where the interrupt vectors are located.)
• In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Binding of Instructions and Data to
Memory
• Address binding of instructions and data to memory
addresses can happen at three different stages
– Compile time: If the memory location is known a priori, absolute code can be generated; must recompile the code if the starting location changes
– Load time: Must generate relocatable code if memory
location is not known at compile time
– Execution time: If the process can be moved during its
execution from one memory segment to another
• Binding delayed until run time
• Need hardware support for address maps (e.g., base and limit
registers)
Memory-Management Unit (MMU)

• Hardware device that maps virtual to physical addresses at run time
• Many methods possible
• To start, consider a simple scheme where the value in the relocation register is added to every address generated by a user process at the time it is sent to memory
– relocation register
– MS-DOS on Intel 80x86 used 4 relocation registers
• The user program deals with logical addresses (0 to max); it never sees the real physical addresses (R to R+max)
– e.g., logical address 25 is sent to memory as physical address R + 25
– Execution-time binding occurs when a reference is made to a location in memory
– Logical addresses are bound to physical addresses
Dynamic relocation using a relocation register
(Figure: the relocation register holds 14000; relocatable code has every address adjusted by that amount.)
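A minimal sketch of what the relocation scheme computes; the register value 14000 follows the figure, and the sample logical address 346 is illustrative:

#include <stdio.h>

#define RELOCATION 14000UL   /* value in the relocation register (from the figure) */

/* The MMU adds the relocation register to every logical address
   generated by the user process before it reaches memory. */
unsigned long mmu_map(unsigned long logical) {
    return logical + RELOCATION;
}

int main(void) {
    printf("logical 346 -> physical %lu\n", mmu_map(346UL)); /* 14346 */
    return 0;
}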
Contiguous Allocation
Multiple processes resides in memory
Contiguous Allocation

• Main memory usually divided into two


partitions:
– Resident operating system, usually held in low
memory
– User processes then held in high memory
– Each process contained in single contiguous
section of memory
Contiguous Allocation (Cont.)

• Multiple-partition allocation
– Divide memory into several fixed-size partitions
– Each partition holds exactly one process
– Degree of multiprogramming is limited by the number of partitions
– When a partition is free, load a process from the job queue
– MFT (IBM OS/360)
Contiguous Allocation (Cont.)
• Multiple-partition allocation
– Variable partition scheme
– Hole – block of available memory; holes of various size are scattered
throughout memory
– Keeps a table of free memory
– When a process arrives, it is allocated memory from a hole large
enough to accommodate it
– Process exiting frees its partition, adjacent free partitions combined
– Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
(Figure: a series of memory snapshots; as processes 5, 8, 9, 10, and 2 enter and exit, holes of various sizes appear, are reused, and are coalesced.)

Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
• First-fit: Allocate the first hole that is big enough

• Best-fit: Allocate the smallest hole that is big enough; must search entire list,
unless ordered by size
– Produces the smallest leftover hole

• Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole
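The three policies can be sketched over a simple array of hole sizes; the hole sizes and the 212 KB request below are illustrative, not from the slides:

#include <stdio.h>

int holes[] = {100, 500, 200, 300, 600};   /* free-hole sizes in KB */
int nholes = 5;

/* Each function returns the index of the chosen hole, or -1 if none fits. */
int first_fit(int n) {
    for (int i = 0; i < nholes; i++)
        if (holes[i] >= n) return i;       /* first hole big enough */
    return -1;
}

int best_fit(int n) {
    int best = -1;
    for (int i = 0; i < nholes; i++)       /* must search the entire list */
        if (holes[i] >= n && (best < 0 || holes[i] < holes[best]))
            best = i;                      /* smallest adequate hole */
    return best;
}

int worst_fit(int n) {
    int worst = -1;
    for (int i = 0; i < nholes; i++)       /* must search the entire list */
        if (holes[i] >= n && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                     /* largest hole */
    return worst;
}

int main(void) {
    int n = 212;  /* request size in KB */
    printf("first fit: %d KB hole\n", holes[first_fit(n)]);  /* 500 */
    printf("best fit:  %d KB hole\n", holes[best_fit(n)]);   /* 300 */
    printf("worst fit: %d KB hole\n", holes[worst_fit(n)]);  /* 600 */
    return 0;
}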
Hardware Support for Relocation
and Limit Registers

• Relocation registers are used to protect user processes from each other, and from changing operating-system code and data
• The relocation register contains the value of the smallest physical address
• The limit register contains the range of logical addresses – each logical address must be less than the limit register
• On a context switch, the dispatcher reloads both registers
• The MMU maps logical addresses dynamically
Fragmentation
What is Fragmentation?
When processes are moved to and from main memory, the available free space in primary memory is broken into smaller pieces. Fragmentation happens when memory cannot be allocated to a process because the available blocks are smaller than the amount of memory the process requires; such blocks of memory stay unused.
Fragmentation is of the following two types:

1. External Fragmentation:
The total amount of free primary memory is sufficient to hold a process, but it cannot be used because the free space is non-contiguous. External fragmentation can be decreased by compaction, i.e., shuffling the contents of memory to place all free memory blocks together into one larger free block.
2. Internal Fragmentation:
Internal fragmentation occurs when the
memory block assigned to the process is larger
than the amount of memory required by the
process. In such a situation a part of memory is
left unutilized because it will not be used by any
other process. Internal fragmentation can be
decreased by assigning the smallest partition of
free memory that is large enough for allocating to
a process.
What is Paging?

• A computer system can address and utilize more memory than is physically present in the hardware. This extra memory is referred to as virtual memory: a portion of secondary storage that the system uses as if it were primary memory. Paging plays an important role in the implementation of virtual memory.
• The process address space is the set of logical addresses that a process refers to in its code. Paging is a memory-management technique in which the process address space is broken into blocks of the same size called "pages". The size of a page is a power of 2, typically between 512 bytes and 8192 bytes. The size of a process is measured in the number of pages it occupies.
• A similar division of main memory into fixed-size blocks called "frames" is performed. The size of a frame is the same as that of a page, to achieve optimum usage of primary memory and to avoid external fragmentation.
Advantages of Paging:
• Paging avoids external fragmentation.
• Paging is easy to implement.
• Paging adds to memory efficiency.
• Since frames are the same size as pages, swapping becomes quite simple.
• Paging supports fast access to data.
Disadvantages of Paging:
• May cause Internal fragmentation
• Page tables consume additional memory.
• Multi-level paging may lead to memory reference
overhead.
Memory Protection
• Memory protection implemented by associating
protection bit with each frame to indicate if read-
only or read-write access is allowed
– Can also add more bits to indicate page execute-only,
and so on
• Valid-invalid bit attached to each entry in the page
table:
– “valid” indicates that the associated page is in the
process logical address space, and is thus a legal page
– “invalid” indicates that the page is not in the process
logical address space
– Or use page-table length register (PTLR)
• Any violations result in a trap to the kernel
• In short, paging memory protection is achieved by associating protection bits and a valid/invalid bit with each page-table entry.
Valid (V) or Invalid (I) Bit in a Page Table
Segmentation

• Segmentation is a method of dividing the primary memory into multiple blocks. Each block, called a segment, has a specific length and a starting address referred to as its base address. The length of a segment determines the amount of memory available in the segment.
• The location of a data item stored in a segment is determined by its distance from the base address of the segment. This distance is called the offset or displacement. Simply put, when data is to be fetched from a segment, the actual address of the data is computed as the sum of the segment's base address and the offset.
• The segment and the offset are both specified in the program instruction itself.
Segmentation Architecture
Types of Segmentation

Segmentation can be divided into two types:


• Virtual Memory Segmentation:
Virtual Memory Segmentation divides the processes
into n number of segments. All the segments are not divided at a
time. Virtual Memory Segmentation may or may not take place at
the run time of a program.
• Simple Segmentation:
Simple Segmentation also divides the processes
into n number of segments but the segmentation is done all
together at once. Simple segmentation takes place at the run
time of a program. Simple segmentation may scatter the
segments into the memory such that one segment of the process
can be at a different location than the other(in a noncontiguous
manner).
Advantages of Segmentation:
• Provides protection within segments.
• Segments can be shared among multiple processes.
• No internal fragmentation.
• Compared to paging, segment tables use less memory.
Disadvantages of Segmentation:
• Separation of free memory space into small
pieces can cause external fragmentation.
• It is costly.
Virtual Memory
Virtual memory is a technique that allows the execution of processes that
are not completely in main memory (physical memory)
Advantages:
• Programs can be larger than the physical memory
• Virtual memory allows processes to share files easily and to implement
shared memory
• Virtual memory involves the separation of logical memory as viewed by
users from physical memory. This separation allows an extremely large
virtual memory to be provided for programmers when only a smaller
physical memory is available.
• Virtual memory makes the task of programming much easier, because the
programmer no longer needs to worry about the amount of physical
memory available
• The virtual address space of a process refers to the logical (or virtual) view
of how a process is stored in memory
• Virtual address spaces that include free space are known as sparse address
spaces
Virtual Memory that is larger than Physical Memory
Demand Paging
Load pages only when the process needs them, instead of loading the entire program into physical memory at execution time. This technique is known as demand paging and is commonly used in virtual memory systems.
• With demand-paged virtual memory, pages are loaded only when they
are demanded during program execution. Pages that are never accessed
are thus never loaded into physical memory
• When we want to execute a process, we swap it into memory. Rather
than swapping the entire process into memory though, we use a lazy
swapper or pager
• A lazy swapper never swaps a page into memory unless that page will be
needed.
• Demand paging is a technique used in virtual memory systems where
pages enter main memory only when requested or needed by the CPU.
In demand paging, the operating system loads only the necessary pages
of a program into memory at runtime, instead of loading the entire
program into memory at the start.
• If a process tries to access a page that is not in main memory, the reference is caught as invalid and causes a page fault (page not in main memory).
Steps in handling a page fault
1 We check an internal table (usually kept with the process control
block) for this process to determine whether the reference was a
valid or an invalid memory access.
2 If the reference was invalid, we terminate the process. If it was valid
but we have not yet brought in that page, we now page it in.
3 We find a free frame (by taking one from the free-frame list, for
example).
4 We schedule a disk operation to read the desired page into the newly
allocated frame.
5 When the disk read is complete, we modify the internal table kept
with the process and the page table to indicate that the page is now
in memory.
6 We restart the instruction that was interrupted by the trap. The
process can now access the page as though it had always been in
memory.
Page Replacement
In order to make the most of virtual memory, we load several processes into memory at the same time. Since we only load the pages that are actually needed by each process at any given time, there are frames available to load many more processes into memory.
 If some process suddenly decides to use more pages and there aren't any free frames available, there are several possible solutions to consider:
 Adjust the memory used by I/O buffering, etc., to free up some frames for user processes.
 Put the process requesting more pages into a wait queue until some free frames become available.
 Swap some process out of memory completely, freeing up its page frames.
 Find some page in memory that isn't being used right now, and swap only that page out to disk, freeing up a frame that can be allocated to the requesting process. This is known as page replacement, and it is the most common solution. There are many different algorithms for page replacement.
Why Need Page Replacement Algorithms?

• Page Fault: A Page Fault occurs when a program running


in CPU tries to access a page that is in the address space
of that program, but the requested page is currently not
loaded into the main physical memory, the RAM of the
system.
• Since the actual RAM is much less than the virtual
memory the page faults occur. So whenever a page fault
occurs, the Operating system has to replace an existing
page in RAM with the newly requested page. In this
scenario, page replacement algorithms help the
Operating System in deciding which page to replace. The
primary objective of all the page replacement algorithms
is to minimize the number of page faults.
Page Replacement Algorithms in Operating Systems

• A page replacement in an operating system is


the process in which a page from the main
memory is replaced with a page from the
secondary memory. Page Replacement occurs
because of Page Faults. Page replacement
algorithms such as FIFO, Optimal page
replacement, LRU, LIFO, and Random page
replacement assist the operating system in
determining which page to replace. Let us start
with the definition of Page Replacement
Algorithms.
• Page Replacement Algorithms in OS refer to the techniques used by an
operating system to manage the memory allocation and deallocation
of the physical memory (RAM) of a computer. These algorithms are
used to determine which page in the physical memory should be
swapped out (removed) and which page should be brought in (loaded)
to be used by the operating system and the running processes.
• The goal of the page replacement algorithm is to reduce the number of
page faults (the number of times a process needs to access a page that
is not currently in physical memory) and to optimize memory usage.
• There are several page replacement algorithms, each with its own
advantages and disadvantages. The common Page Replacement
Algorithms are given below:
 FIFO Page Replacement Algorithm
 LRU Page Replacement Algorithm
 Optimal Page Replacement Algorithm
 LIFO Page Replacement Algorithm
FIFO Page Replacement Algorithm

• FIFO algorithm is the simplest of all the page replacement algorithms. In


this, we maintain a queue of all the pages that are in the memory
currently. The oldest page in the memory is at the front-end of the
queue and the most recent page is at the back or rear-end of the queue.
• Whenever a page fault occurs, the operating system looks at the front-
end of the queue to know the page to be replaced by the newly
requested page. It also adds this newly requested page at the rear-end
and removes the oldest page from the front-end of the queue.
Advantages
• Simple to understand and implement, and does not cause much overhead
Disadvantages
• Poor performance: it ignores the frequency and recency of use and simply replaces the oldest page
• Suffers from Belady's anomaly
Example: Consider the page reference string
as 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames. Find the
number of page faults?
Initially, all of the slots are empty so page faults occur at 3,1,2.
Page faults = 3
When page 1 comes, it is in the memory so no page fault occurs.
Page faults = 3
When page 6 comes, it is not present and a page fault occurs. Since
there are no empty slots, we remove the front of the queue, i.e. 3.
Page faults = 4
When page 5 comes, it is also not present and hence a page fault
occurs. The front of the queue i.e 1 is removed.
Page faults = 5
When page 1 comes, it is not found in memory and again a page
fault occurs. The front of the queue i.e 2 is removed.
Page faults = 6
When page 3 comes, it is again not found in memory; a page fault
occurs, and page 6 is removed as it is at the front of the queue.
Total page faults = 7
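The example can be checked with a short simulation. This is a minimal sketch of FIFO replacement using the frame array as a circular queue; it reproduces the 7 faults computed above:

#include <stdio.h>

int main(void) {
    int refs[] = {3, 1, 2, 1, 6, 5, 1, 3};    /* reference string */
    int nrefs = 8, nframes = 3;
    int frames[3] = {-1, -1, -1};
    int oldest = 0, faults = 0;               /* `oldest` is the queue front */

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) hit = 1; /* page already resident */
        if (!hit) {
            frames[oldest] = refs[i];          /* replace the oldest page */
            oldest = (oldest + 1) % nframes;   /* advance the queue front */
            faults++;
        }
    }
    printf("page faults = %d\n", faults);      /* prints 7 */
    return 0;
}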
Optimal Page Replacement Algorithm
Optimal page replacement is the best page replacement algorithm
as this algorithm results in the least number of page faults. In this
algorithm, the pages are replaced with the ones that will not be used for
the longest duration of time in the future. In simple terms, the pages
that will be referred farthest in the future are replaced in this algorithm.

Advantages
• Excellent efficiency and Less complexity
• Easy to use and understand
• Simple data structures can be used to implement
• Used as the benchmark for other algorithms
Disadvantages
• Time-consuming, and error handling is difficult
• Requires future knowledge of the reference string, which is not available in practice
Example:
Perform Optimal Page replacement algorithm using
page reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames . Find the number of page
faults?
Initially, since all the slots are empty, pages 3, 1, 2 cause a page
fault and take the empty slots.
Page faults = 3
When page 1 comes, it is in the memory and no page fault occurs.
Page faults = 3
When page 6 comes, it is not in the memory, so a page fault occurs
and 2 is removed as it is not going to be used again.
Page faults = 4
When page 5 comes, it is also not in the memory and causes a page
fault. Similar to above 6 is removed as it is not going to be used
again.
page faults = 5
When page 1 and page 3 come, they are in the memory so no page
fault occurs.
Total page faults = 5
Least Recently Used (LRU) Page Replacement Algorithm
• The least recently used page replacement algorithm keeps the track of
usage of pages over a period of time. This algorithm works on the basis of
the principle of locality of a reference which states that a program has a
tendency to access the same set of memory locations repetitively over a
short period of time. So pages that have been used heavily in the past are
most likely to be used heavily in the future also.
• In this algorithm, when a page fault occurs, then the page that has not
been used for the longest duration of time is replaced by the newly
requested page.
Advantages
• It is open for full analysis
• Doesn’t suffer from Belady’s anomaly
• Often more efficient than other algorithms
Disadvantages
• It requires additional data structures to be implemented and More
complex
Example: Let's see the performance of LRU on the
reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3-page frames:
Initially, since all the slots are empty, pages 3, 1, 2 cause a page fault
and take the empty slots.
Page faults = 3
When page 1 comes, it is in the memory and no page fault occurs.
Page faults = 3
When page 6 comes, it is not in the memory, so a page fault occurs and
the least recently used page 3 is removed.
Page faults = 4
When page 5 comes, it again causes a page fault, and page 2 is removed:
it is now the least recently used page, since page 1 was referenced more
recently than page 2.
Page faults = 5
When page 1 comes again, it is already in memory, so no page fault
occurs.
Page faults = 5
When page 3 comes, a page fault occurs again, and this time page 6 is
removed as the least recently used one.
Total page faults = 6
Thrashing
• Thrashing is the state of a process where there is high paging
activity. A process that is spending more time paging than
executing is said to be thrashing.
• This results in low CPU utilization, and the operating system
responds by attempting to increase the degree of
multiprogramming.
Page Replacement Algorithms and Effect of Thrashing

• To deal with page faults, the operating system uses either the
global frames replacement algorithm or the local frames
replacement algorithm to bring in enough pages in the main
memory. Let’s see how these replacement algorithms affect
thrashing.
1. Global Page Replacement
• Global page replacement can bring in any page, and once thrashing is detected, it attempts to bring in more pages. As a result, no process gets enough frames, and thrashing in the operating system becomes worse. To summarize, when thrashing occurs, the global page replacement technique is ineffective.
2. Local Page Replacement
• In contrast to the Global Page Replacement, the Local
Page Replacement will select pages that are part of that
process. As a result, there is a chance that the operating
system’s thrashing will be reduced. As previously
demonstrated, there are several disadvantages to using
Local Page replacement. As a result, local page
replacement is simply an alternative to global page
replacement.
Causes of Thrashing in OS

• Thrashing affects the execution performance of the operating system; it degrades the system's performance considerably.

• When CPU usage is low, the process scheduling mechanism attempts to load many processes into memory, increasing the degree of multiprogramming. In this case, there are more processes in memory than there are memory frames available. When a high-priority process enters memory and no frame is free, another process is moved to secondary storage, and the freed frame is assigned to the higher-priority process.
How to Overcome Thrashing in OS?

• Given below are the techniques to prevent thrashing in OS:
1) Working Set Model
• This model is based on Locality Model.
• The basic principle states that if we give a
process enough frames to accommodate its
current locality, it will only fail when it moves
to a new locality. However, if the allocated
frames are smaller than the size of the current
locality, the process will thrash.
• 2) Page Fault Frequency
• The Page-Fault Frequency concept is a more
direct approach to dealing with thrashing.
Paging Model of Logical and Physical Memory
(Figure: the page table translates logical addresses to physical addresses.)
Address Translation Scheme

• Address generated by the CPU is divided into:
– Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory
– Page offset (d) – offset within the page; combined with the base address, it defines the physical memory address that is sent to the memory unit
• A logical address is therefore split as:

  | page number p (m−n bits) | page offset d (n bits) |

– for a given logical address space of size 2^m and page size 2^n
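In code, splitting the address is a shift and a mask. A minimal sketch for the small example that follows (m = 4, n = 2; the page-to-frame mapping matches the figure's values, with page 2's frame assumed):

#include <stdio.h>

#define N 2   /* page size = 2^N = 4 bytes */

int page_table[4] = {5, 6, 1, 2};   /* page -> frame (frame 1 for page 2 is assumed) */

unsigned translate(unsigned logical) {
    unsigned p = logical >> N;               /* page number: high m-n bits */
    unsigned d = logical & ((1u << N) - 1);  /* page offset: low n bits */
    return (page_table[p] << N) | d;         /* frame base + offset */
}

int main(void) {
    printf("logical 13 -> physical %u\n", translate(13)); /* page 3, frame 2: 9 */
    return 0;
}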


Paging Hardware
Paging Example
(n = 2, m = 4: 16-byte logical address space, 4-byte pages, 32-byte physical memory)
• Logical address 0 = page 0, offset 0; page 0 is in frame 5, so the physical address is 5*4 + 0 = 20
• Logical address 3 = page 0, offset 3; physical address 5*4 + 3 = 23
• Logical address 4 = page 1, offset 0; page 1 is in frame 6, so the physical address is 6*4 + 0 = 24
• Logical address 13 = page 3, offset 1; page 3 is in frame 2, so the physical address is 2*4 + 1 = 9
This run-time address binding maps the user's contiguous view onto scattered frames.
Paging
• External fragmentation? None – any free frame can be allocated.
• Calculating internal fragmentation
– Page size = 2,048 bytes
– Process size = 72,766 bytes
– 35 pages + 1,086 bytes → 36 frames
– Internal fragmentation of 2,048 − 1,086 = 962 bytes
• So are small frame sizes desirable?
– But smaller frames increase the page table size
– Poor disk I/O
– Page sizes have grown over time
• Solaris supports two page sizes – 8 KB and 4 MB
• User's view and physical memory are now very different
– user view: the process occupies a single contiguous memory space
• By implementation, a process can only access its own memory
– protection
• Each page-table entry is 4 bytes (32 bits) long
• Each entry can point to one of 2^32 page frames
• If each frame is 4 KB, the system can address 2^44 bytes (16 TB) of physical memory

• For a virtual address space of 16 MB (2^24): what is the page table size?
(With 4 KB pages: 2^12 entries × 4 bytes each = 16 KB.)
• Process P1 arrives
• Requires n pages => n frames must be
available
• Allocate n frames to the process P1
• Create page table for P1
Frame Table and Free Frames
(Figure: user's view vs. system's view of RAM, before and after allocation; the frame table records which frames are allocated and which are free.)
Implementation of Page Table
• For each process, Page table is kept in main memory
• Page-table base register (PTBR) points to the page table
• Page-table length register (PTLR) indicates size of the page
table
• In this scheme every data/instruction access requires two
memory accesses
– One for the page table and one for the data / instruction
• The two memory access problem can be solved by the use
of a special fast-lookup hardware cache called associative
memory or translation look-aside buffers (TLBs)
Associative Memory
• Associative memory – all entries are searched in parallel; each entry holds a (page #, frame #) pair
• Address translation for (p, d):
– If p is in an associative register, get the frame # out
– Otherwise get the frame # from the page table in memory
Implementation of Page Table (Cont.)

• TLBs typically small (64 to 1,024 entries)

• On a TLB miss, value is loaded into the TLB for faster access next time
– Replacement policies must be considered (LRU)
– Some entries can be wired down for permanent fast access

• Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely identifies each
process (PID) to provide address-space protection for that process
– Otherwise need to flush at every context switch
Paging Hardware With TLB
Effective Access Time
• Associative lookup = ε time units
– Can be < 10% of memory access time
• Hit ratio = α
– Hit ratio – percentage of times that a page number is found in the associative registers; related to the size of the TLB
• Effective Access Time (EAT), with a 100 ns memory access:

EAT = (100 + ε)α + (200 + ε)(1 − α)

• Consider α = 80%, ε = 20 ns for TLB search, 100 ns for memory access
– EAT = 0.80 × 120 + 0.20 × 220 = 140 ns
• Consider a better hit ratio: α = 98%, ε = 20 ns, 100 ns for memory access
– EAT = 0.98 × 120 + 0.02 × 220 = 122 ns
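The EAT formula is easy to evaluate directly; a small sketch reproducing the two cases above:

#include <stdio.h>

/* A TLB hit costs one memory access plus the TLB search;
   a miss costs two memory accesses plus the TLB search. */
double eat(double alpha, double eps, double mem) {
    return alpha * (mem + eps) + (1.0 - alpha) * (2.0 * mem + eps);
}

int main(void) {
    printf("alpha = 0.80: EAT = %.0f ns\n", eat(0.80, 20, 100)); /* 140 ns */
    printf("alpha = 0.98: EAT = %.0f ns\n", eat(0.98, 20, 100)); /* 122 ns */
    return 0;
}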
Memory Protection
• Memory protection implemented by associating protection
bit with each frame to indicate if read-only or read-write
access is allowed
– Can also add more bits to indicate page execute-only, and so on

• Valid-invalid bit attached to each entry in the page table:


– “valid” indicates that the associated page is in the process’ logical
address space, and is thus a legal page
– “invalid” indicates that the page is not in the process’ logical
address space
– Or use PTLR

• Any violations result in a trap to the kernel


Valid (v) or Invalid (i)
Bit In A Page Table
(Figure: a 14-bit address space (0 to 16383) with 2 KB pages; process P1 uses only addresses 0 to 10468, so page-table entries beyond its pages are marked invalid. The last valid page suffers internal fragmentation; use of a PTLR (length register) is an alternative.)
Shared Pages Example
• System with 40 users
– All use a common text editor
• The text editor contains 150 KB of code and 50 KB of data (page size 50 KB)
– Without sharing: 40 × (150 + 50) = 8000 KB!
• Shared code
– One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems)
• The code never changes during execution
• Only one copy of the editor is in memory
• Total memory consumption
– 40 × 50 + 150 = 2150 KB
Shared Pages Example
Data share: example

writer.c:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main()
{
    int shmid, key = 3, i;
    char *ptr;
    /* create (or get) a 100-byte shared memory segment */
    shmid = shmget((key_t)key, 100, IPC_CREAT | 0666);
    ptr = shmat(shmid, NULL, 0);      /* attach it to our address space */
    printf("shmid=%d ptr=%p\n", shmid, (void *)ptr);
    strcpy(ptr, "hello");             /* write into the shared segment */
    i = shmdt((char *)ptr);           /* detach */
    return 0;
}

reader.c:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main()
{
    int shmid, key = 3;
    char *ptr;
    /* attach to the same segment by using the same key */
    shmid = shmget((key_t)key, 100, IPC_CREAT | 0666);
    ptr = shmat(shmid, NULL, 0);
    printf("shmid=%d ptr=%p\n", shmid, (void *)ptr);
    printf("\nstr %s\n", ptr);        /* read what the writer stored */
    return 0;
}

Both processes' ptr pointers map to the same shared memory segment.
Structure of the Page Table
• Memory requirements for the page table can get huge using straightforward methods
– Consider a 32-bit logical address space, as on modern computers
– Page size of 4 KB (2^12)
– The page table would have 2^20 (= 2^32 / 2^12) entries, about 1 million
– If each entry is 4 bytes → 4 MB of physical memory for the page table alone
• That amount of memory used to cost a lot
• Don't want to allocate it contiguously in main memory
Solutions:
• Hierarchical Paging
• Hashed Page Tables
• Inverted Page Tables
Hierarchical Page Tables
• Break up the page table into multiple pages
• We then page the page table itself
• A simple technique is a two-level page table
Two-Level Page-Table Scheme
Two-Level Paging Example
• A logical address (on 32-bit machine with 4KB page size) is divided
into:
– a page number consisting of 20 bits
– a page offset consisting of 12 bits
• Since the page table is paged, the page number is further divided
into:
– a 10-bit page number
– a 10-bit page offset
• Thus, a logical address is as follows:

  | page number p1 (10 bits) | page number p2 (10 bits) | page offset d (12 bits) |

• where p1 is an index into the outer page table, and p2 is the displacement within the page of the inner page table
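A minimal sketch of extracting the two indices and the offset from a 32-bit address (the sample address is arbitrary):

#include <stdio.h>

int main(void) {
    unsigned addr = 0x12345678u;         /* arbitrary 32-bit logical address */
    unsigned p1 = addr >> 22;            /* top 10 bits: outer-table index */
    unsigned p2 = (addr >> 12) & 0x3FFu; /* next 10 bits: inner-table index */
    unsigned d  = addr & 0xFFFu;         /* low 12 bits: offset in the page */
    printf("p1=%u p2=%u d=%u\n", p1, p2, d);
    return 0;
}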
Two-Level Page-Table Scheme
(Each inner page table occupies 2^10 entries × 4 bytes = 4 KB, which equals the page size.)
Address-Translation Scheme
(Figure: two-level address translation, e.g., as used on the Pentium II.)
64-bit Logical Address Space
• Even the two-level paging scheme is not sufficient
• If the page size is 4 KB (2^12)
– Then the page table has 2^52 entries
– With a two-level scheme, the inner page tables could be 2^10 4-byte entries
– The address would look like:

  | outer page p1 (42 bits) | inner page p2 (10 bits) | offset d (12 bits) |

– The outer page table has 2^42 entries, or 2^44 bytes
– One solution is to add a 2nd outer page table
– But in the following example the 2nd outer page table is still 2^34 bytes in size
• And possibly 4 memory accesses to get to one physical memory location
Three-level Paging Scheme

SPARC (32 bits), Motorola 68030 support three and four level paging respectively
Hashed Page Tables

• Common in virtual address spaces > 32 bits
• The page number is hashed into a page table
– This page table contains a chain of elements hashing to the same location
• Each element contains (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element
• Virtual page numbers are compared along this chain, searching for a match
– If a match is found, the corresponding physical frame is extracted
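A minimal sketch of the chain lookup; the table size and the sample mapping (page 42 in frame 7) are illustrative:

#include <stdio.h>

/* One element of a hash chain: (virtual page number, frame, next). */
struct entry { unsigned page, frame; struct entry *next; };

#define TABLE_SIZE 16
struct entry *table[TABLE_SIZE];

/* Walk the chain at hash(p), comparing virtual page numbers. */
int lookup(unsigned page, unsigned *frame) {
    for (struct entry *e = table[page % TABLE_SIZE]; e; e = e->next)
        if (e->page == page) { *frame = e->frame; return 1; } /* match */
    return 0;   /* no match: page not mapped */
}

int main(void) {
    struct entry e = {42, 7, NULL};
    table[42 % TABLE_SIZE] = &e;
    unsigned f;
    if (lookup(42, &f)) printf("page 42 -> frame %u\n", f);
    return 0;
}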
Hashed Page Table
Inverted Page Table
• Rather than each process having a page table and keeping track of all possible logical pages, track all frames
• One entry for each frame
• Each entry consists of the page number stored in that frame, with information about the process that owns that page
• Decreases the memory needed to store each page table,
– but increases the time needed to search the table when a page reference occurs
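A minimal sketch of the trade-off: translation searches the whole table for a (pid, page) match, and the matching index is the frame number. The table contents are illustrative:

#include <stdio.h>

struct frame_entry { int pid; unsigned page; };   /* one entry per frame */

#define NFRAMES 8
struct frame_entry ipt[NFRAMES] = { {1, 0}, {2, 0}, {1, 3} }; /* rest unused */

int translate(int pid, unsigned page) {
    for (int i = 0; i < NFRAMES; i++)      /* linear search over all frames */
        if (ipt[i].pid == pid && ipt[i].page == page)
            return i;                      /* the index IS the frame number */
    return -1;                             /* not resident: page fault */
}

int main(void) {
    printf("pid 1, page 3 -> frame %d\n", translate(1, 3)); /* frame 2 */
    return 0;
}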
Inverted Page Table Architecture
(Used on the 64-bit UltraSPARC and PowerPC; each entry also carries an address-space ID.)
Segmentation
• Memory-management scheme that supports the user's view of memory
• A program is a collection of segments
– A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays
– The compiler generates the segments; the loader assigns the segment numbers
User's View of a Program
The user specifies each address by two quantities:
(a) segment name
(b) segment offset

A logical address therefore contains the tuple <segment #, offset>.

• Segments are variable-sized and unordered
• A segment's length depends on its purpose in the program
• Elements within a segment are identified by their offset
Logical View of Segmentation
Logical address: <segment-number, offset>
(Figure: segments 1–4 of the logical address space are placed at scattered locations in physical memory.)

• The long-term scheduler finds and allocates memory for all segments of a program
• Variable-size partition scheme
Memory Image
Executable file and virtual address

Symbol table:
  Name  Address
  SQR   0
  SUM   4
(a.out virtual address space)

Paging view:
  0: Load 0
  4: ADD 4

Segmentation view:
  <CODE, 0>: Load <ST, 0>
  <CODE, 2>: ADD <ST, 4>
Segmentation Architecture
• Logical address consists of a two tuple:
<segment-number, offset>
• Segment table – maps two-dimensional logical address to
physical address;
• Each table entry has:
– base – contains the starting physical address where the segments
reside in memory
– limit – specifies the length of the segment
• Segment-table base register (STBR) points to the segment
table’s location in memory
• Segment-table length register (STLR) indicates number of
segments used by a program;
segment number s is legal if s < STLR
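A minimal sketch of the segment-table lookup described above; the base/limit values are illustrative (segment 2 at base 4300 with limit 400 gives the classic <2, 53> -> 4353 translation):

#include <stdio.h>

struct seg { unsigned base, limit; };   /* one segment-table entry */

struct seg seg_table[] = { {1400, 1000}, {6300, 400}, {4300, 400} };
unsigned stlr = 3;                      /* number of segments in use */

long translate(unsigned s, unsigned d) {
    if (s >= stlr) return -1;               /* illegal segment number: trap */
    if (d >= seg_table[s].limit) return -1; /* offset beyond segment: trap */
    return seg_table[s].base + d;           /* physical address */
}

int main(void) {
    printf("<2, 53>  -> %ld\n", translate(2, 53));   /* 4353 */
    printf("<1, 500> -> %ld\n", translate(1, 500));  /* -1: trap */
    return 0;
}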
Segmentation Hardware
Example of Segmentation
Segmentation Architecture

• Protection
– Protection bits are associated with segments; with each entry in the segment table associate:
• validation bit = 0 ⇒ illegal segment
• read/write/execute privileges
• Code sharing occurs at the segment level
• Since segments vary in length, memory allocation is a dynamic storage-allocation problem
– Handled by the long-term scheduler
– First fit, best fit, etc.
• Fragmentation
Segmentation with Paging

Key idea:
Segments are split into multiple pages
Each page is loaded into a frame in memory
Segmentation with Paging
• The Intel 80386 (used by IBM OS/2) supports segmentation with paging
– Each segment can be 4 GB
– Up to 16 K segments per process
– Logical address = <selector (16 bits), offset (32 bits)>, where the selector holds a segment number s (13 bits), a GDT/LDT bit g (1 bit), and protection bits p (2 bits)
– Segments are divided into two partitions
• First partition: up to 8 K segments private to the process (kept in the local descriptor table, LDT)
• Second partition: up to 8 K segments shared among all processes (kept in the global descriptor table, GDT)
• The CPU generates logical addresses (via six segment registers)
– Each is given to the segmentation unit, which produces a linear address
– The linear address is given to the paging unit, which generates the 32-bit physical address in main memory
• The segmentation and paging units together form the equivalent of an MMU
• The page size is 4 KB
Logical to Physical Address Translation in the Pentium
(Figures: Intel Pentium segmentation, with 8-byte segment descriptors selected by the segment register, followed by the Pentium paging architecture; the page table has 2^20 entries.)
Virtual Memory
Background
• Code needs to be in memory to execute, but
entire program rarely used
– Error code, unusual routines, large data structures
• Entire program code not needed at same time
• Consider ability to execute partially-loaded
program
– Program no longer constrained by limits of physical
memory
– programs could be larger than physical memory
– More processes can be accommodated
Virtual Memory That is
Larger Than Physical Memory

(Figure: a large virtual space mapped onto a small physical memory.)
Classical paging
• Process P1 arrives
• Requires n pages => n frames must be available
• Allocate n frames to the process P1
• Create a page table for P1

With virtual memory, we can instead allocate < n frames.


Background
• Virtual memory – separation of user logical memory from
physical memory
– Extremely large logical space is available to programmer
– Concentrate on the problem
• Only part of the program needs to be in memory for execution
– Logical address space can therefore be much larger than physical
address space
– Starts with address 0, allocates contiguous logical memory
– Physical memory
• Collection of frame

• Virtual memory can be implemented via:


– Demand paging
– Demand segmentation
Demand Paging
• Bring a page into memory only when it is needed

• Lazy swapper – never swaps a page into memory


unless page will be needed
– Swapper that deals with pages is a pager

• Less I/O needed, no unnecessary I/O


– Less memory needed
– More users
Valid address
• A page is needed when a reference is made to it; validity information is available in the PCB
– invalid reference ⇒ abort
– not in memory ⇒ bring it into memory
Transfer of a Paged Memory to Contiguous Disk Space
• When we want to execute a process, we swap it in
• Instead of swapping in the entire process, load pages on demand
• This is the job of the pager

Page Table When Some Pages Are Not in Main Memory
(Figure: the pager loads only the few necessary pages into memory.)
Valid-Invalid Bit
• With each page-table entry a valid–invalid bit is associated
(v ⇒ in memory, i.e., memory resident; i ⇒ not in memory)
• Initially, the valid–invalid bit is set to i on all entries
• Example of a page-table snapshot: resident pages are marked v and carry a frame number; the remaining entries are marked i, with the page contents kept at a disk address
• During address translation, if the valid–invalid bit in a page-table entry is i ⇒ page fault
Page Fault
• If a page is not in memory, the first reference to that page will trap to the operating system:
page fault
1. The operating system looks at the PCB to decide:
– Invalid reference ⇒ abort
– Just not in memory ⇒ load the page
2. Get an empty frame
3. Swap the page into the frame via a scheduled disk operation
4. Reset the page table to indicate the page is now in memory
(set validation bit = v)
5. Restart the instruction that caused the page fault
What Happens if There is no Free Frame?
• Example
– 40 frames in memory
– 8 processes each needs 10 pages
– 5 of them never used
• Two options
– Run 4 processes (10 pages)
– Run 8 processes (5 pages)
• Increase the degree of multiprogramming
– Over allocating memory

• Page fault
– No free frame
– Terminate? swap out? replace the page?

• Page replacement – find some page in memory, not really in use, page it out

– Performance – want an algorithm which will result in minimum number of page faults

• Same page may be brought into memory several times


Steps in Handling a Page Fault
Check
PCB
Pure Demand Paging
• Extreme case – start process with no pages in memory
– OS sets instruction pointer to first instruction of process, non-
memory-resident -> page fault
– Swap in that page
– Pure demand paging
• Actually, a given instruction could access multiple pages (instruction + data) -> multiple page faults
– Mitigated by locality of reference
• Hardware support needed for demand paging
– Page table with valid / invalid bit
– Secondary memory (swap device with swap space)
– Instruction restart after page fault
Steps in the ISR
• In Demand Paging
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page on the disk
5. Get a free frame
6. Issue a read from the disk to a free frame:
1. Wait in a queue for this device until the read request is serviced
2. Wait for the device seek and/or latency time
3. Begin the transfer of the page to a free frame
7. While waiting, allocate the CPU to some other user
8. Receive an interrupt from the disk I/O subsystem (I/O completed)
9. Save the registers and process state of the running process
10. Determine that the interrupt was from the disk
11. Correct the page table and other tables to show page is now in memory
12. Wait for the CPU to be allocated to this process again
13. Restore the user registers, process state, and new page table, and then resume the interrupted
instruction
Performance of Demand Paging
Demand paging affects the performance of the computer system.

• Page fault rate: 0 ≤ p ≤ 1
– if p = 0, no page faults
– if p = 1, every reference is a fault

• Effective Access Time (EAT)


EAT = (1 – p) x memory access
+ p (page fault overhead
+ swap page out
+ swap page in
+ restart overhead
)
Demand Paging Example
• Memory access time = 200 nanoseconds
• Average page-fault service time = 8 milliseconds

• EAT = (1 – p) x 200 + p (8 milliseconds)


= (1 – p ) x 200 + p x 8,000,000
= 200 + p x 7,999,800
• If one access out of 1,000 causes a page fault, then
EAT = 8.2 microseconds.
This is a slowdown by a factor of 40!!
• If we want performance degradation < 10 percent:
220 > 200 + 7,999,800 × p
20 > 7,999,800 × p
– p < 0.0000025
– i.e., less than one page fault in every 400,000 memory accesses
(Figure: better utilization of swap space.)
Allocation of Frames
• How do we allocate the fixed amount of
memory among various processes?

• Single user system


– Trivial
Allocation of Frames
• Each process needs minimum number of frames
• Minimum number is defined by the instruction set
• Page fault forces to restart the instruction
– Enough frames to hold all the pages for that instruction
• Example:
– Single address instruction (2 frames)
– Two address instruction (3 frames)
• Maximum of course is total frames in the system
• Two major allocation schemes
– fixed allocation
– proportional allocation
Fixed and Proportional Allocation
• Equal allocation – m frames and n processes
– Each process gets m/n
• For example, if there are 100 frames (after allocating frames for the OS) and 5 processes, give each process 20 frames
– Keep some as a free frame buffer pool
• Unfair to both small and large processes
• Proportional allocation – allocate according to the size of the process
– Dynamic, as the degree of multiprogramming and process sizes change

  s_i = size of process p_i
  S = Σ s_i
  m = total number of frames
  a_i = allocation for p_i = (s_i / S) × m

  Example: m = 64, s_1 = 10, s_2 = 127, so S = 137;
  a_1 = (10/137) × 64 ≈ 5 frames, a_2 = (127/137) × 64 ≈ 59 frames
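The proportional split can be computed directly; a minimal sketch using the numbers above (rounding to the nearest frame):

#include <stdio.h>

int main(void) {
    int m = 64;            /* total number of frames */
    int s[] = {10, 127};   /* process sizes */
    int S = s[0] + s[1];   /* total demand: S = 137 */

    for (int i = 0; i < 2; i++) {
        int ai = (s[i] * m + S / 2) / S;       /* a_i = (s_i / S) * m, rounded */
        printf("a%d = %d frames\n", i + 1, ai); /* 5 and 59 */
    }
    return 0;
}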
Priority Allocation

Allocation of frames
• Depends on the multiprogramming level
• Use a proportional allocation scheme based on priorities along with size
Need For Page Replacement
(Figure: frames are fully allocated to processes P1 and P2; when the PC reaches a non-resident page, no free frame is available.)
Basic Page Replacement

1. Find the location of the desired page on disk

2. Find a free frame:


- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to select a
victim frame (of that process)
- Write victim frame to disk

3. Bring the desired page into the (newly) free frame; update the page and
frame tables

4. Continue the process by restarting the instruction that caused the trap

Note now potentially 2 page transfers for page fault – increasing Effective
memory access time
Page Replacement
(Figure: a victim frame is swapped out and the desired page is swapped in; the page table and frame table are updated.)
Evaluation
• Evaluate algorithm by running it on a particular string of
memory references (reference string) and computing the
number of page faults on that string
– String is just page numbers, not full addresses
– Repeated access to the same page does not cause a page fault
• Trace the memory reference of a process
0100, 0432, 0101, 0612, 0102, 0104, 0101, 0611, 0102
• Page size 100B
• Reference string 1, 4, 1, 6, 1, 6

• In all our examples, the reference string is
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
Graph of Page Faults Versus
The Number of Frames
First-In-First-Out (FIFO) Algorithm

Associates a time with each frame when the page was brought
into memory

When a page must be replaced, the oldest one is chosen


Reference string:
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1

Limitation:
FIFO may evict a page that is still heavily used, e.g., a variable that was initialized early and is constantly used
FIFO Page Replacement
3 frames (3 pages can be in memory at a time)

15 page faults

• How to track ages of pages?


• Just use a FIFO queue to hold all the pages in memory
• Replace the page at the head
• Insert at tail
Optimal Algorithm

• Replace page that will not be used for longest


period of time

• How do you know this?


– Can’t read the future

• Used for measuring how well your algorithm


performs
Optimal Page Replacement

9 page faults
Least Recently Used (LRU) Algorithm

• Use past knowledge rather than the future
– The past is a proxy for the future
• Replace the page that has not been used for the longest period of time
• Associate the time of last use with each page
• 12 faults – better than FIFO but worse than OPT
• Generally a good algorithm, and frequently used
• But how to implement it?
LRU Algorithm-Implementation
• Counter implementation
– CPU maintains a clock
– Every page entry has a Time of use;
• every time page is referenced, copy the clock into the time of
use
– When a page needs to be replaced, look at the “Time of use” to
find smallest value
• Search through table needed
(Figure: on every memory reference, the CPU's clock value is copied into the "time of use" field of the referenced page-table entry.)
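A minimal sketch of the counter scheme (the page count and reference pattern are illustrative):

#include <stdio.h>

#define NPAGES 4
unsigned long time_of_use[NPAGES];   /* per-entry "time of use" field */
unsigned long clock_ticks = 0;       /* the CPU-maintained clock */

void reference(int page) {
    time_of_use[page] = ++clock_ticks;   /* copy the clock on every access */
}

int victim(void) {
    int v = 0;
    for (int i = 1; i < NPAGES; i++)     /* linear search through the table */
        if (time_of_use[i] < time_of_use[v]) v = i;
    return v;                            /* smallest time of use = LRU page */
}

int main(void) {
    reference(0); reference(1); reference(2); reference(3); reference(0);
    printf("victim = page %d\n", victim());  /* page 1 */
    return 0;
}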
LRU Algorithm-Implementation
• Stack implementation
• Keep a stack of page numbers in a double linked list form:
• Page referenced:
• move it to the top
• Victim page is the bottom page
LRU Approximation Algorithms
• LRU needs special hardware and still slow
• Reference bit
– With each page associate a bit, initially = 0
– When page is referenced bit set to 1
• Additional reference bit algorithm
– Record the reference bits in regular interval
– Keep a 8 bit string for each page in memory
– At regular interval, timer copies the reference bit
to the high order bit (MSB) of the string.
– Shift the other bits right side by one bit
(Figure: over successive periods 0–5, the timer copies each page's reference bit into the MSB of its history byte, shifts the older bits right, and resets the reference bit.)
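A minimal sketch of the additional-reference-bits bookkeeping (the three pages and the reference pattern are illustrative):

#include <stdio.h>

#define NPAGES 3
unsigned char history[NPAGES];   /* 8-bit reference history per page */
unsigned char ref_bit[NPAGES];   /* hardware reference bit */

/* At each timer interval: shift the history right, move the
   reference bit into the MSB, then reset the reference bit. */
void timer_tick(void) {
    for (int i = 0; i < NPAGES; i++) {
        history[i] = (unsigned char)((history[i] >> 1) | (ref_bit[i] << 7));
        ref_bit[i] = 0;
    }
}

int main(void) {
    ref_bit[0] = 1; timer_tick();   /* page 0 referenced in period 1 */
    ref_bit[1] = 1; timer_tick();   /* page 1 referenced in period 2 */
    for (int i = 0; i < NPAGES; i++)
        printf("page %d history = 0x%02X\n", i, history[i]);
    /* Lowest value = least recently used: page 2 (0x00), then page 0 (0x40). */
    return 0;
}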
LRU Approximation Algorithms (Cont.)
• Second-chance algorithm
– Generally FIFO, plus hardware-provided reference bit
– Clock replacement
– If page to be replaced has
• Reference bit = 0 -> replace it
• reference bit = 1 then:
– set reference bit 0, leave page in memory (reset the time)
– replace next page, subject to same rules
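A minimal sketch of the clock hand (the frame contents and reference bits are illustrative):

#include <stdio.h>

#define NFRAMES 4
int pages[NFRAMES]   = {7, 0, 1, 2};   /* resident pages, in FIFO order */
int ref_bit[NFRAMES] = {1, 0, 1, 0};
int hand = 0;                          /* clock hand: next frame in FIFO order */

int choose_victim(void) {
    for (;;) {
        if (ref_bit[hand] == 0) {      /* second chance already used: victim */
            int v = hand;
            hand = (hand + 1) % NFRAMES;
            return v;
        }
        ref_bit[hand] = 0;             /* give a second chance and move on */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    printf("victim = page %d\n", pages[choose_victim()]);  /* page 0 */
    return 0;
}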
Second-Chance (Clock) Page-Replacement Algorithm
(An enhanced variant considers the {reference bit, dirty bit} combination when choosing a victim.)
Counting Algorithms
• Keep a counter of the number of references
that have been made to each page
Least and most frequently used

• LFU Algorithm: replaces page with smallest


count

• MFU Algorithm: based on the argument that


the page with the smallest count was probably
just brought in and has yet to be used
Graph of Page Faults Versus
The Number of Frames
Does increasing the number of page frames always decrease the page-fault rate?

FIFO Example – program with 5 pages, 3 frames of memory allocated

Reference string: 1  2  3  4  1  2  5  1  2  3  4  5

Frame 1:          1  1  1  4  4  4  5  5  5  5  5  5
Frame 2:             2  2  2  1  1  1  1  1  3  3  3
Frame 3:                3  3  3  2  2  2  2  2  4  4
Fault?            F  F  F  F  F  F  F  -  -  F  F  -    → 9 page faults

FIFO Example – program with 5 pages, 4 frames of memory allocated

Reference string: 1  2  3  4  1  2  5  1  2  3  4  5

Frame 1:          1  1  1  1  1  1  5  5  5  5  4  4
Frame 2:             2  2  2  2  2  2  1  1  1  1  5
Frame 3:                3  3  3  3  3  3  2  2  2  2
Frame 4:                   4  4  4  4  4  4  3  3  3
Fault?            F  F  F  F  -  -  F  F  F  F  F  F    → 10 page faults
Belady's Anomaly
(Graph: number of page faults vs. number of allocated frames; for FIFO, the curve can rise even as frames increase.)
Belady’s Anomaly
• This most unexpected result is known as
Belady’s anomaly – for some page-
replacement algorithms, the page fault rate
may increase as the number of allocated
frames increases

• Is there a characterization of algorithms


susceptible to Belady’s anomaly?
Stack Algorithms
• Certain page replacement algorithms are more “well
behaved” than others
• (In the FIFO example), the problem arises because the set of
pages loaded with a memory allocation of 3 frames is not
necessarily also loaded with a memory allocation of 4 frames

• There are a set of paging algorithms whereby the set of pages


loaded with an allocation of m frames is always a subset of
the set of pages loaded with an allocation of m +1 frames.
This property is called the inclusion property

• Algorithms that satisfy the inclusion property are not subject


to Belady’s anomaly. FIFO does not satisfy the inclusion
property and is not a stack algorithm
LRU algorithm – stack property
(Figure: running LRU on the reference string 6, 5, 3, 6, 4, 3, 9 with 3 frames and with 4 frames; at every step, the set of pages in memory with 3 frames is a subset of the set in memory with 4 frames.)
Global vs. Local Allocation
• Frames are allocated to various processes

• If process Pi generates a page fault


– select for replacement one of its frames
– select for replacement a frame from another process

• Local replacement – each process selects from only its own set of allocated
frames
– More consistent per-process performance
– But possibly underutilized memory

• Global replacement – process selects a replacement frame from the set of all
frames; one process can take a frame from another
– But then process execution time can vary greatly
– But greater throughput ----- so more common
• A process cannot control its own page-fault rate
– It depends on the paging behavior of other processes
Thrashing
• Suppose a process uses a set of "active pages"
– and the number of allocated frames is less than that
• Page fault
– Replace some "active" page
– But the replaced "active" page is quickly needed back
– Quickly another page fault occurs, again and again
– Thrashing ⇒ the process is busy swapping pages in and out
• The OS monitors CPU utilization
– If it is low, it increases the degree of multiprogramming
Thrashing
• Global page replacement
– A process enters a new phase of execution (e.g., a subroutine call)
– Page fault
– It takes frames from other processes
• Replacing "active" frames of those processes
– Those processes then start page-faulting
– The faulting processes wait on the disk's device queue
• The ready queue empties
– CPU utilization decreases
• The CPU scheduler sees low utilization and increases the degree of multiprogramming
– More page faults
– Further drop in CPU utilization
• Page faults increase tremendously
Thrashing (Cont.)
• Solution
– Local replacement
– One process cannot steal frames from other
processes
• Provide a process as many frames as needed
– Able to load all active pages
– How do we know?
– Locality model
Demand Paging and Thrashing

• Why does demand paging work?
Locality model
– A process migrates from one locality to another
– Localities may overlap
• Allocate enough frames to a process to accommodate its current locality
• Why does thrashing occur?
Σ (size of localities) > total allocated frames
– Limit the effects by using local or priority page replacement
Locality In A Memory-Reference Pattern
Working-Set Model
• Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
• WSSi (working-set size of process Pi) = total number of distinct pages referenced in the most recent Δ (varies in time)
– if Δ is too small, it will not encompass the entire locality
– if Δ is too large, it will encompass several localities
– if Δ = ∞, it will encompass the entire program
• D = Σ WSSi ≡ total demand for frames
– An approximation of locality
• if D > m ⇒ thrashing
• Policy: if D > m, then suspend or swap out one of the processes
Page-Fault Frequency
• More direct approach than WSS
• Establish “acceptable” page-fault frequency (PFF) rate
and use local replacement policy
– If actual rate too low, process loses frame
– If actual rate too high, process gains frame
