
OPERATING SYSTEMS MODULE 4

MEMORY MANAGEMENT
8.1 BACKGROUND
Main Memory
Basic hardware structure of main memory: A program must be brought (from disk) into main
memory and placed within a process for it to be run. Main memory and registers are the only storage
that the CPU can access directly.
 Register access takes one CPU clock cycle.
 A main-memory access can take many cycles.
 A cache sits between main memory and the CPU registers.
 Memory must be protected to ensure correct operation.
 A pair of base and limit registers is used to define the logical (virtual) address space, as shown
in the figures below.

A base and a limit-register define a logical-address space
Hardware address protection with base and limit-registers

ADDRESS BINDING
Explain the multi-step processing of a user program with a neat block diagram.

Address binding of instructions to memory addresses can happen at 3 different stages, as shown in
the figure below.
Compile Time: If the memory location is known in advance, absolute code can be generated. If the
starting location changes, the code must be recompiled.

Dept. of CSE,CEC Page 1



Load Time: Re-locatable code must be generated if the memory location is not known at compile time.
Execution Time: Binding is delayed until run time if the process can be moved during its execution
from one memory segment to another. This needs hardware support for address maps (e.g. base
and limit registers).

Multistep processing of a user-program


LOGICAL VERSUS PHYSICAL ADDRESS SPACE


Logical-address is generated by the CPU (also referred to as virtual-address). Physical-address is
the address seen by the memory-unit. The compile-time and load-time address-binding methods
generate identical logical and physical addresses. The execution-time address binding scheme
results in differing logical and physical addresses.
Differentiate between Logical address and Physical address
1. A logical address is an address generated by the CPU; a physical address is an address seen
by the memory unit.
2. The set of all logical addresses generated by the CPU for a program is called the logical
address space; the set of all physical addresses mapped to those logical addresses is called the
physical address space.
3. A logical address is generated by the CPU; a physical address is computed by the Memory
Management Unit (MMU).
4. The user can view the logical address of a program, but can never view its physical address.

MMU (Memory-Management Unit)


The MMU is the hardware device that maps virtual addresses to physical addresses. The value in
the relocation register is added to every address generated by a user process at the time the address
is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses.

Dynamic relocation using a relocation-register
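The mapping the MMU performs can be sketched in a few lines of Python (an illustration only; the register values 14000 and 1200 are assumed example numbers, not from the notes):

```python
# Sketch of dynamic relocation: the MMU adds the relocation (base) register
# to every logical address, after checking it against the limit register.

RELOCATION_REGISTER = 14000   # assumed example base value
LIMIT_REGISTER = 1200         # assumed example logical-address-space size

def translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if not (0 <= logical_address < LIMIT_REGISTER):
        # Real hardware would trap to the OS on an addressing error.
        raise MemoryError("trap: addressing error")
    return RELOCATION_REGISTER + logical_address

print(translate(346))   # 14346
```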


DYNAMIC LOADING & LINKING


***Write a short note on dynamic loading and Linking
Dynamic loading can be used to obtain better memory-space utilization. A routine is not loaded
until it is called. This works as follows:
1. Initially, all routines are kept on disk in a re-locatable-load format.
2. Firstly, the main-program is loaded into memory and is executed.
3. When the main program calls a routine, it first checks whether the routine has already
been loaded.
4. If the routine has not yet been loaded, the loader is called to load the desired routine into memory.
5. Finally, control is passed to the newly loaded-routine.
Advantages:
1. An unused routine is never loaded.
2. Useful when large amounts of code are needed to handle infrequently occurring cases.
3. Although the total program-size may be large, the portion that is used (and hence loaded)
may be much smaller.
4. Does not require special support from the OS.
Dynamic Linking: Linking is postponed until execution-time. This feature is usually used with
system libraries, such as language subroutine libraries. A stub is included in the image for each
library-routine reference. The stub is a small piece of code used to locate the appropriate memory-
resident library-routine. When the stub is executed, it checks to see whether the needed routine is
already in memory. If not, the program loads the routine into memory. Stub replaces itself with the
address of the routine, and executes the routine. Thus, the next time that particular code-segment is
reached, the library-routine is executed directly, incurring no cost for dynamic-linking.
All processes that use a language library execute only one copy of the library code.
SHARED LIBRARIES
A library may be replaced by a new version, and all programs that reference the library will
automatically use the new one. Version information is included in both the program and the library
so that programs won't accidentally execute incompatible versions.
8.2 SWAPPING
A process must be in memory to be executed. A process can be swapped temporarily out-of-memory
to a backing-store (secondary device) and then brought into memory for continued execution.
Backing-store is a fast disk which is large enough to accommodate copies of all memory-
images for all users. Roll out/Roll in is a swapping variant used for priority-based scheduling
algorithms.

 Lower-priority process is swapped out so that higher-priority process can be


loaded and executed.
 Once the higher-priority process finishes, the lower-priority process can be
swapped in and continued.
Swapping depends upon address-binding:
1) If binding is done at load-time, then process cannot be easily moved to a different
location.
2) If binding is done at execution-time, then a process can be swapped into a
different memory- space, because the physical-addresses are computed during
execution-time.

Swapping of two processes using a disk as a backing-store

Major part of swap-time is transfer-time; i.e. total transfer-time is directly proportional to the
amount of memory swapped.
Disadvantages: Context-switch time is fairly high. If we want to swap a process, we must be sure
that it is completely idle. Two solutions: (1) never swap a process with a pending I/O operation, or
(2) execute I/O operations only into OS buffers.
8.3 CONTIGUOUS MEMORY ALLOCATION
Memory is usually divided into 2 partitions:
1. One for the resident OS.
2. One for the user-processes.
Each process is contained in a single contiguous section of memory.
Memory Mapping & Protection


Memory-protection means protecting OS from user-process and protecting user-processes from one
another. Memory-protection is done using
 Relocation-register: contains the value of the smallest physical-address.
 Limit-register: contains the range of logical-addresses.
Each logical-address must be less than the limit-register. The MMU maps the logical-address
dynamically by adding the value in the relocation-register. This mapped-address is sent to memory
as shown in below figure. When the CPU scheduler selects a process for execution, the dispatcher
loads the relocation and limit-registers with the correct values. Because every address generated
by the CPU is checked against these registers, we can protect the OS from the running-process. The
relocation-register scheme provides an effective way to allow the OS size to change dynamically.
Transient OS code: code that comes and goes as needed; keeping it out of memory saves memory
space and avoids the overhead of unnecessary swapping.

Hardware support for relocation and limit-registers


MEMORY ALLOCATION
Two types of memory partitioning are:
1) Fixed-sized partitioning and
2) Variable-sized partitioning
Fixed-sized Partitioning (Static partition) The memory is divided into fixed-sized partitions
before the process enters main memory. The size of each partition may or may not be same. Each
partition may contain exactly one process. The degree of multiprogramming is bound by the number
of partitions (Limitation on number of processes). When a partition is free, a process is selected
from the input queue and loaded into the free partition. When the process terminates, the partition
becomes available for another process.
Variable-sized Partitioning ( Dynamic partition): Partitions are made as the process enters into
main memory. The OS keeps a table indicating which parts of memory are available and which
parts are occupied. A hole is a block of available memory. Normally, memory contains a set of
holes of various sizes. Initially, all memory is available for user-processes and considered one large
hole. When a process arrives, the process is allocated memory from a large hole. If we find the hole,
we allocate only as much memory as is needed and keep the remaining memory available to satisfy
future requests.
Three strategies used to select a free hole from the set of available holes.

1. First Fit
2. Best Fit
3. Worst Fit
Difference between First fit and Best fit algorithms

1. First fit allocates the first hole in main memory that is big enough to fit the requested page;
best fit allocates the smallest hole in main memory that is big enough.
2. The search time for free holes in main memory is less for first fit and more for best fit.
3. First fit is faster in operation; best fit is slower.
4. Internal fragmentation is more with first fit and less with best fit.


First Fit
 Allocate the first hole that is big enough to fit the requested page.
 Searching can start either at the beginning of the set of holes or at the location where the
previous first-fit search ended.
Best Fit
 Allocate the smallest hole that is big enough to fit the requested page
 We must search the entire list, unless the list is ordered by size.
 This strategy produces the smallest leftover hole. Internal fragmentation is less.
Worst Fit
 Allocate the largest hole and fit the requested page.
 Again, we must search the entire list, unless it is sorted by size.
 This strategy produces the largest leftover hole. Internal fragmentation is more.
First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
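The three hole-selection strategies can be sketched as follows (a minimal illustration; a real allocator also tracks hole addresses and splits the chosen hole, which is omitted here):

```python
# Hole selection for contiguous allocation. 'holes' is a list of free-hole
# sizes; each strategy returns the index of the chosen hole, or None if no
# hole is large enough to satisfy the request.

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i                      # first hole big enough
    return None

def best_fit(holes, request):
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None   # smallest adequate hole

def worst_fit(holes, request):
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None   # largest hole

holes = [200, 700, 500, 300, 100, 400]
print(first_fit(holes, 315))   # 1  (the 700K hole)
print(best_fit(holes, 315))    # 5  (the 400K hole)
print(worst_fit(holes, 315))   # 1  (the 700K hole)
```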
FRAGMENTATION
What is fragmentation?
As processes are loaded into and removed from main memory, the free memory space is broken into
little pieces. Eventually, processes cannot be allocated memory because the free blocks are too
small, and those memory blocks or partitions remain unused. This problem is called fragmentation.
Two types of memory fragmentation:
1) Internal fragmentation and
2) External fragmentation
Internal Fragmentation
The general approach is to break the physical-memory into fixed-sized blocks and allocate memory
in units based on block size or process size as shown below:


The allocated(partitioned) memory to a process may be slightly larger than the requested-memory.
The difference between requested-memory and allocated-memory is called internal fragmentation
i.e. Unused memory that is internal to a partition.
External Fragmentation
External fragmentation occurs when there is enough total memory space to satisfy a request, but the
available spaces are not contiguous (i.e. storage is fragmented into a large number of small
holes). For example, in the figure below there are 3 free holes of sizes, say, 9K, 5K and 6K. Consider
the next arriving process whose size is 20K. A memory partition cannot be allocated for this process,
even though the total available memory is more than the size of the process, because the available
spaces are not contiguous.

Both the first-fit and best-fit strategies for memory-allocation suffer from external fragmentation.
Statistical analysis of first-fit reveals that given N allocated blocks, another 0.5 N blocks will be
lost to fragmentation. This property is known as the 50-percent rule.
Given the memory partitions of 200K, 700K, 500K, 300K, 100K, 400K (in that order), apply first
fit, best fit and worst fit to place processes of 315K, 427K, 250K, 550K.
Solution:
First Fit: 315K -> 700K (385K unused), 427K -> 500K (73K unused), 250K -> 300K (50K unused),
550K -> cannot be allocated (no remaining partition is large enough).
Best Fit: 315K -> 400K (85K unused), 427K -> 500K (73K unused), 250K -> 300K (50K unused),
550K -> 700K (150K unused). All four processes are placed; internal fragmentation = 85K + 73K +
50K + 150K = 358K.
Worst Fit: 315K -> 700K (385K unused), 427K -> 500K (73K unused), 250K -> 400K (150K unused),
550K -> cannot be allocated.
Total internal fragmentation (worst fit) = 385K + 73K + 150K = 568K.
Under worst fit, the available free holes total 200K + 300K + 100K = 600K, yet memory cannot be
allocated for the 550K process because no single partition is >= 550K. The external fragmentation
due to non-availability of contiguous memory is therefore 550K, the size of process P4.
Therefore the best fit algorithm makes the most efficient use of memory for the given processes.

2. Given the 5 memory partitions 100KB, 500KB, 200KB, 300KB and 600KB (in that order), how
do the first fit, best fit and worst fit algorithms place processes of 212KB, 417KB, 112KB and
426KB? Which algorithm makes the most efficient use of memory?

First Fit: 212K -> 500K (288K unused), 417K -> 600K (183K unused), 112K -> 200K (88K unused),
426K -> cannot be allocated.
Internal fragmentation = 288K + 88K + 183K = 559K.
There is no external fragmentation, because the total available memory (100K + 300K = 400K) is
less than the size of P4 (426K).

Best Fit: 212K -> 300K (88K unused), 417K -> 500K (83K unused), 112K -> 200K (88K unused),
426K -> 600K (174K unused).
Internal fragmentation = 83K + 88K + 88K + 174K = 433K.
There is no external fragmentation; all four processes are placed.

Worst Fit: 212K -> 600K (388K unused), 417K -> 500K (83K unused), 112K -> 300K (188K unused),
426K -> cannot be allocated.
Internal fragmentation = 83K + 188K + 388K = 659K.
There is no external fragmentation, because the total available memory (100K + 200K = 300K) is
less than the size of P4 (426K); there is no free hole big enough to fit the P4 process.

Therefore the best fit algorithm makes the most efficient use of memory for the given processes.


8.4 PAGING
What is paging?
Paging is a memory management scheme that permits the physical address space of a process to be
noncontiguous.
It avoids external fragmentation. It also solves the considerable problem of fitting memory chunks
of varying sizes onto the backing store.
Paging Hardware
Explain the concept of simple paging hardware.
Physical (main) memory is divided into fixed-sized blocks called frames, and logical memory is
divided into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available main-memory
frames from the backing store. The backing store is divided into fixed-sized blocks that are
of the same size as the memory frames.
• Paging still suffers from internal fragmentation.

The page-table contains the base-address (frame number on which page is loaded) of each page in
physical-memory. Address generated by CPU is the logical address divided into 2 parts:
1. Page-number(p) is used as an index to the page-table and
2. Offset(d) is combined with the base-address (frame number) to define the physical-address.
This physical-address is sent to the memory-unit to fetch the required data/instruction.


Paging model of logical and physical-memory

With supporting paging hardware, explain in detail concept of paging with an example for
32-byte memory with 4 byte pages with a process being 16 bytes. How many bits are reserved
for page number and page offset in the logical address. Suppose the logical address is 5,
calculate the corresponding physical address, after populating memory and page table.
For explanation refer above paging hardware topic
Example:
Given that page size = 4 byte. Therefore frame size = 4 byte.
Process size = 16 byte
Number of pages = 16/4 = 4 Pages
Main memory size = 32 byte
Number of frames in main memory = 32/4 =8
When CPU generates logical address it contains two parts:
Page Number and Offset field.
Number of pages = 4, so we need 2 bits to represent the 4 different pages: 0, 1, 2, 3.
The number of bits required for the offset field = log2(page size) = log2(4) = 2 bits.
So 2 bits are reserved for the page number and 2 bits for the offset; the CPU therefore generates
logical addresses of 4 bits.
Suppose CPU generates a logical address = 5
Logical address (0101)
Page number (2) Offset (2)
01 01


The physical address corresponding to this logical address (5) is obtained by referring the following
memory and page table information.
In the above logical address page no = 01 = Page 1
Now the MMU refers the Page Table and finds the frame number corresponding to page 1. ie:
frame 6.
Since main memory contains 8 frames, we need 3 bits to represent 8 different frames, f0, f1….f7,
in physical address. Offset field = 2 bits
Physical address
Frame number (3) Offset (2)
110 01
That is the logical address 5 (page 1, offset 1) maps to physical address = (6 x 4) +1 = 25
Therefore the physical address corresponding to the logical address 5 = 11001 = 25
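The worked example above can be sketched in Python. The mapping page 1 -> frame 6 is taken from the example; the frame numbers for pages 0, 2 and 3 are assumed values for illustration:

```python
# Paging address translation for the 16-byte process in a 32-byte memory:
# 4-byte pages, 8 frames, logical address = page number | offset.

PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page -> frame; only 1 -> 6 is
                                         # from the example, the rest assumed

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # high-order bits of the address
    offset = logical_address % PAGE_SIZE   # low-order bits of the address
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # physical address

print(translate(5))   # 25  (page 1, offset 1 -> frame 6, offset 1)
```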

PROBLEMS WITH SIMPLE PAGING SCHEME


Mention the problems with simple paging

1. The problem with simple paging is that an extra memory reference is required to access the
page table to get the frame number corresponding to the page number.
2. Thus two memory accesses are needed to access a byte in main memory: one for the page-table
entry and one for the byte itself.
3. In the simple paging scheme, memory access is therefore slowed by a factor of 2.


TRANSLATION LOOKASIDE BUFFER


The problems with simple paging scheme can be solved using translation look-aside buffer (TLB)
paging scheme.
What is TLB? With neat diagram explain the concept of TLB.
A translation-Look-aside buffer (TLB) is an associative high speed memory cache that is used to
reduce the time taken to access a user memory location. It is a part of MMU. The TLB contains
only a few of the page-table entries.
Working:

A TLB entry consists of two parts: a) page number and b) frame number. The TLB contains only a
few of the page-table entries. When a logical address is generated by the CPU, its page number is
presented to the TLB.
 If the page-number is found(TLB hit), its frame-number is immediately available and used
to access memory.
 If page-number is not found in TLB (TLB miss), a memory-reference to page table must
be made. The obtained frame-number can be used to access memory. In addition, we add
this page-number and frame-number to the TLB, so that they will be found quickly on the
next reference.
If the TLB is already full of entries, the OS must select one for replacement. Percentage of times
that a particular page-number is found in the TLB is called hit ratio.
Advantage: Search operation is fast.
Disadvantage: Hardware is expensive.


In the paging scheme with TLB, it takes 20ns to search the TLB and 100ns to access memory.
Find the effective access time and percentage slowdown in memory access time if
i. Hit ratio is 80%

ii. Hit ratio is 98%

Solution:
In the paging scheme with TLB, if we find the page in TLB, it takes 20ns to search the TLB and
100ns to access memory, and then a mapped memory access takes 120ns when the page number is
in the TLB.
If we fail to find the page number in the TLB (20ns), we must first access the page table in main
memory to get the corresponding frame number (100ns), and then access the desired byte in
memory (100ns).
Thus a total of 220ns is required to access the byte if the page number is not in the TLB (TLB
search + page-table access in main memory + desired byte access in main memory = 20 + 100 +
100 = 220ns).
i) Effective memory access time = 0.80 x 120 + 0.20 x 220 = 140ns
Percentage slowdown in memory access time = (140 - 100)/100 = 40%

ii) Effective memory access time = 0.98 x 120 + 0.02 x 220 = 122ns
Percentage slowdown in memory access time = (122 - 100)/100 = 22%
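The calculation above can be checked with a short script (the 20ns TLB search time and 100ns memory access time are taken from the problem statement):

```python
# Effective access time (EAT) with a TLB:
# hit  -> TLB search + one memory access   = 120ns
# miss -> TLB search + two memory accesses = 220ns

TLB_SEARCH = 20
MEM_ACCESS = 100

def effective_access_time(hit_ratio):
    hit_time = TLB_SEARCH + MEM_ACCESS        # 120ns on a TLB hit
    miss_time = TLB_SEARCH + 2 * MEM_ACCESS   # 220ns on a TLB miss
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

for ratio in (0.80, 0.98):
    eat = effective_access_time(ratio)
    slowdown = (eat - MEM_ACCESS) / MEM_ACCESS * 100
    print(f"hit ratio {ratio}: EAT = {eat:.0f}ns, slowdown = {slowdown:.0f}%")
```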

MEMORY PROTECTION
Memory-protection is achieved by protection-bits for each frame. The protection-bits are kept in
the page-table. One protection-bit can define a page to be read-write or read-only. Every reference
to memory goes through the page-table to find the correct frame-number. Firstly, the physical-
address is computed. At the same time, the protection-bit is checked to verify that no writes are
being made to a read-only page. An attempt to write to a read-only page causes a hardware-trap to
the OS (or memory-protection violation).

Valid Invalid Bit


This bit is attached to each entry in the page-table as shown below. Valid bit: The page is in the
process’ logical-address space. Invalid bit: The page is not in the process’ logical-address space.
Illegal addresses are trapped by use of valid-invalid bit. The OS sets this bit for each page to allow
or disallow access to the page


Valid(V) or Invalid (I) bit in page table


SHARED PAGES
Advantage of paging: it is possible to share common code. Re-entrant code is non-self-modifying
code; it never changes during execution, so two or more processes can execute the same code at the
same time. Each process has its own copy of registers and data storage to hold the data for its
execution, and the data for two different processes will be different. For example, only one copy of
a shared text editor need be kept in physical memory, as shown in the figure below: each user's
page table maps onto the same physical copy of the editor, but data pages are mapped onto different
frames.
Disadvantage: 1) Systems that use inverted page-tables have difficulty implementing shared-
memory

Sharing of code in paging environment


8.5 STRUCTURE OF THE PAGE TABLE


The 3 types of page table structures are:
1. Hierarchical Paging
2. Hashed Page-tables
3. Inverted Page-tables
HIERARCHICAL PAGING
***Explain Hierarchical Paging structure with example
Most computers support a large logical-address space (2^32 to 2^64 bytes). In these systems, the
page table itself becomes excessively large, so the page table is divided into smaller pieces.
Two Level Paging Algorithm: The page-table itself is also paged as shown below:

Two level page table scheme

This is also known as forward-mapped page-table because address translation works from the
outer page-table inwards.

Address translation for a two-level 32-bit paging architecture

Consider a system with a 32-bit logical-address space and a page size of 4 KB. A logical address
is divided into a 20-bit page number and a 12-bit page offset. Since the page table is itself paged,
the page number is further divided into a 10-bit outer page number (p1) and a 10-bit inner page
number (p2). Thus, a logical address is split as:
p1 (10 bits) | p2 (10 bits) | offset d (12 bits)
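The 10/10/12 bit split for this two-level scheme can be sketched in Python (the sample address is an arbitrary assumed value):

```python
# Split a 32-bit logical address into outer page number (p1), inner page
# number (p2) and offset (d) for a two-level page table with 4 KB pages.

def split(addr):
    offset = addr & 0xFFF          # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF      # next 10 bits: index into inner page table
    p1 = (addr >> 22) & 0x3FF      # top 10 bits: index into outer page table
    return p1, p2, offset

print(split(0x00403004))   # (1, 3, 4)
```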

HASHED PAGE TABLES


This approach is used for handling address spaces larger than 32 bits. The virtual page number is
hashed into a hash table. Each entry in the hash table contains a linked list of elements that hash to
the same location (to handle collisions). Each element consists of 3 fields: the virtual page number,
the value of the mapped page frame, and a pointer to the next element in the linked list.
The algorithm works as follows

Hashed page-table

The virtual page-number is hashed into the hash-table. The virtual page-number is compared with
the first element in the linked-list. If there is a match, the corresponding page-frame (field 2) is used
to form the desired physical-address. If there is no match, subsequent entries in the linked-list are
searched for a matching virtual page-number.
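A minimal sketch of this lookup, using Python lists in place of the linked lists (the table size and the inserted mappings are assumed example values):

```python
# Hashed page table with chaining: each bucket holds (vpn, frame) pairs,
# playing the role of the linked list of colliding entries.

TABLE_SIZE = 8

class HashedPageTable:
    def __init__(self):
        self.buckets = [[] for _ in range(TABLE_SIZE)]

    def insert(self, vpn, frame):
        self.buckets[vpn % TABLE_SIZE].append((vpn, frame))

    def lookup(self, vpn):
        # Walk the chain at the hashed bucket, comparing virtual page numbers.
        for entry_vpn, frame in self.buckets[vpn % TABLE_SIZE]:
            if entry_vpn == vpn:
                return frame
        return None   # no match: page fault

hpt = HashedPageTable()
hpt.insert(3, 42)
hpt.insert(11, 7)      # 11 % 8 == 3: collides with vpn 3, chained
print(hpt.lookup(11))  # 7
```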
INVERTED PAGE TABLES
It has one entry for each real page of memory. Each entry consists of virtual-address of the page
stored in that real memory-location and information about the process that owns the page.
Each virtual-address consists of a triplet:
<process-id, page-number, offset>.
Each inverted page-table entry is a pair <process-id, page-number>
The algorithm works as follows:


Inverted page table

1. When a memory-reference occurs, part of the virtual-address, consisting of <process-id,


page-number>, is presented to the memory subsystem.
2. The inverted page-table is then searched for a match.
3. If a match is found at entry i, then the physical address <i, offset> is generated.
4. If no match is found, then an illegal address access has been attempted.
Advantage:
1) Decreases memory needed to store each page-table
Disadvantages:
1. Increases amount of time needed to search table when a page reference occurs.
2. Difficulty implementing shared-memory.
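The inverted-table search can be sketched as follows (the process IDs, page numbers and 4 KB frame size are assumed example values):

```python
# Inverted page table: one entry per physical frame, each holding
# (process_id, page_number). The index of the matching entry is the frame.

inverted = [
    ("P1", 0),   # frame 0 holds page 0 of process P1
    ("P2", 3),   # frame 1 holds page 3 of process P2
    ("P1", 2),   # frame 2 holds page 2 of process P1
]

def translate(pid, page, offset, frame_size=4096):
    for i, entry in enumerate(inverted):
        if entry == (pid, page):
            return i * frame_size + offset   # physical address <i, offset>
    raise MemoryError("illegal address access")

print(translate("P2", 3, 100))   # frame 1 -> 1*4096 + 100 = 4196
```

The linear search here is what makes lookups slow in practice; real systems pair the inverted table with a hash table to limit the search.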

8.6 SEGMENTATION
Explain segmentation with an example.
Segmentation is a memory-management scheme that supports the user view of memory. A logical-
address space is a collection of segments. Each segment is of variable size and has a name and a
length. Addresses specify both the segment name and the offset within the segment.
Normally, the user-program is compiled, and the compiler automatically constructs segments
reflecting the input program.
For example: The code, Global variables, The heap, from which memory is allocated. The stacks
used by each thread.


Programmers view of a program

Hardware Support
Segment-table maps 2 dimensional user-defined addresses into one-dimensional physical-
addresses. In the segment-table, each entry has following 2 fields:
Segment-base contains starting physical-address where the segment resides in memory.
Segment-limit specifies the length of the segment

Segmentation hardware
A logical-address consists of 2 parts:
Segment-number(s) is used as an index to the segment-table. Offset(d) must be between 0 and the
segment-limit.
If offset is not between 0 & segment-limit, then we trap to the OS (logical-addressing attempt
beyond end of segment).
If offset is legal, then it is added to the segment-base to produce the physical-memory address


Consider the following segment table:


Segment Base Length(Limit)
0 219 600
1 2300 14
2 1327 580
3 1952 96

What are the physical addresses for the following logical addresses:
i. (0, 430) ii. (1, 10) iii. (2, 500) iv) 3, 400

Solution
i. (0, 430)
Segment = 0 and offset = 430
Therefore physical address = 219 + 430 = 649
ii. (1, 10)
Segment = 1 and offset = 10
Therefore physical address = 2300 + 10 = 2310
iii. (2, 500)
Segment = 2 and offset = 500
Therefore physical address = 1327 + 500 = 1827
iv. (3, 400)
Segment = 3 and offset = 400, which is greater than the segment limit 96. A reference to byte 400
of segment 3 therefore results in a trap to the OS, as this segment is only 96 bytes long,
generating a segment-fault error.
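The segment-table lookups above can be sketched in a few lines:

```python
# Segmentation hardware: the segment number indexes a table of
# (base, limit) pairs; an offset >= limit traps to the OS.

segment_table = [(219, 600), (2300, 14), (1327, 580), (1952, 96)]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: segment fault (offset beyond segment limit)")
    return base + offset

print(translate(0, 430))   # 649
print(translate(2, 500))   # 1827
```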
Consider the following segment table:

Segment Base Length(Limit)


0 330 124
1 876 211
2 111 99
3 498 302

What are the physical addresses for the following logical addresses:
i. (0, 9, 9) ii. (2, 78) iii. (1, 265) iv) (3, 222) v) (0, 111)

Solution
i. (0, 9, 9): The logical address is invalid, since a segmented logical address consists of only
two parts: a segment number and an offset.
ii. (2, 78)
Segment = 2 and offset = 78
Therefore physical address = 111 + 78 = 189
iii. (1, 265)
Segment = 1 and offset = 265, which is greater than the segment limit 211. A reference to byte
265 of segment 1 therefore results in a trap to the OS, as segment 1 is only 211 bytes long,
generating a segment-fault error.
iv. (3, 222)
Segment = 3 and offset = 222
Therefore physical address = 498 + 222 = 720
v. (0, 111)
Segment = 0 and offset = 111
Therefore physical address = 330 + 111 = 441

Differentiate between segmentation and paging

1. A page is of fixed size; a segment is of variable size.
2. Paging may lead to internal fragmentation; segmentation may lead to external fragmentation.
3. In paging, the CPU divides the user-specified address into a page number and an offset; in
segmentation, the user specifies each address by two quantities, a segment number and an offset.
4. The hardware decides the page size; the segment size is specified by the user.
5. Paging uses a page table that contains the base address of each page; segmentation uses a
segment table that contains the segment base and limit (segment length).


VIRTUAL MEMORY MANAGEMENT


Virtual Memory
What is virtual memory? How it can be implemented? What are its benefits?
Virtual memory (VM) is a technique that allows the execution of processes that are not completely
in memory. VM involves the separation of logical memory, as perceived by users, from physical
memory. This separation allows an extremely large virtual memory to be provided for programmers
even when only a smaller physical memory is available, and it makes the task of programming
much easier.
Benefits of Virtual Memory:
1. More programs could be run at the same time.
2. Programmers could write for a large virtual-address space and need no longer use overlays.
3. Less I/O would be needed to load/swap programs into memory, so each user program would
run faster.
Virtual-memory can be implemented by:
i. Demand paging and
ii. Demand segmentation.
The virtual (or logical) address-space of a process refers to the logical (or virtual) view of how a
process is stored in memory. Physical-memory may be organized in page-frames and that the
physical page-frames assigned to a process may not be contiguous. It is up to the MMU to map
logical-pages to physical page-frames in memory.

Diagram showing virtual memory that is larger than physical-memory


DEMAND PAGING
********What is on demand paging?Explain demand paging in detail.
The process of loading the page into main memory on demand (whenever page fault occurs) is
known as demand paging.
 A demand-paging system is similar to a paging system with swapping, where processes reside
in secondary memory (usually a disk).
 It is a method of virtual-memory management in which a process begins execution with none
of its pages in main memory; many page faults occur until most of the process's working set
of pages is in main memory.
 When we want to execute a process, we swap it into memory. Instead of swapping in the whole
process, a lazy swapper brings only the necessary pages into main memory.

The valid-invalid bit scheme can be used to distinguish between pages that are in memory and pages
that are on the disk.
 If the bit is set to valid, the associated page is both legal and in memory.
 If the bit is set to invalid, the page either is not valid (i.e. not in the logical-address space of
the process) or is valid but is currently on the disk
The hardware to support demand paging is i) Page table and ii) Secondary memory.
Page table mark an entry invalid through a valid-invalid bit. Secondary memory holds the pages
that are not in main memory.


Page-table when some pages are not in main-memory


Advantages:
i. Avoids reading into memory-pages that will not be used,
ii. Decreases the swap-time and
iii. Decreases the amount of physical-memory needed.
PAGE FAULT
******What is a page fault? With a supporting diagram explain the steps involved in handling
page fault.
A page fault is a type of exception raised by computer hardware when a running program (process)
tries to access a page that was not brought into main memory.
Procedure for handling the page-fault:

Steps in handling a page-fault


1. Check an internal-table to determine whether the reference was a valid or an invalid memory
access.
2. If the reference is invalid, we terminate the process. If reference is valid, but we have not
yet brought in that page, we now page it in.
3. Find a free-frame (by taking one from the free-frame list, for example).
4. Read the desired page into the newly allocated frame.
5. Modify the internal-table and the page-table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the trap.
PURE DEMAND PAGING
What is pure demand paging?
It is a technique in which a page is never brought into main memory until it is required.
• In the extreme case, we can start executing a process with no pages in memory.
• When the operating system sets the instruction pointer to the first instruction of the process,
which is on a non-memory-resident page, the process immediately faults for the page.
• After this page is brought into memory, the process continues to execute, faulting as
necessary until every page that it needs is in memory. At that point it can execute with no
more faults.
• This scheme is pure demand paging: never bring a page into memory until it is
required.
Some programs may access several new pages of memory with each instruction, causing multiple
page faults and poor performance. In practice, however, programs tend to exhibit locality of
reference, so demand paging gives reasonable performance.

PERFORMANCE OF DEMAND PAGING


*****Discuss on the performance of demand paging
Demand paging can significantly affect the performance of a computer system.
Let p be the probability of a page fault (0 ≤ p ≤ 1): if p = 0, there are no page faults; if p = 1, every
reference causes a fault.
The effective access time (EAT) = [(1 - p) * memory access time] + [p * page-fault time]
A page-fault causes the following events to occur:
1. Trap to the OS.
2. Save the user-registers and process-state.
3. Determine that the interrupt was a page-fault.


4. Check that the page-reference was legal and determine the location of the page on the disk.
5. Issue a read from the disk to a free frame:
a). Wait in a queue for this device until the read request is serviced.
b). Wait for the device seek time.
c). Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user.
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process-state for the other user (if step 6 is executed).
9. Determine that the interrupt was from the disk.
10. Correct the page-table and other tables to show that the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user-registers, process-state, and new page-table, and then resume the
interrupted instruction.
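As a numeric illustration of the EAT formula above (a sketch; the 200 ns memory-access time and 8 ms fault-service time are assumed example values, not figures from these notes):

```python
def effective_access_time(p, mem_ns=200, fault_ns=8_000_000):
    """EAT = (1 - p) * memory-access time + p * page-fault service time (in ns)."""
    return (1 - p) * mem_ns + p * fault_ns

print(effective_access_time(0.0))     # no faults: 200 ns
print(effective_access_time(0.001))   # one fault per 1000 accesses: ~8200 ns, a ~40x slowdown
```

Even a fault rate of one in a thousand accesses slows effective memory access by a factor of about forty, which is why keeping the fault rate low matters so much.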
COPY-ON-WRITE
***Explain copy-on write process in virtual memory.
Copy On Write technique allows the parent and child processes initially to share the same pages in
memory. If either process writes to a shared-page, a copy of the shared-page is created. Copy on
write allows more efficient process creation as only modified pages are copied.
For example:
Assume that the child process attempts to modify a page containing portions of the stack, with the
pages set to be copy-on-write. OS will then create a copy of this page, mapping it to the address
space of the child process. Child process will then modify its copied page & not the page belonging
to the parent process.

Before Process 1 modifies page C.


After Process 1 modifies page C
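The bookkeeping behind copy-on-write can be sketched with a toy page table. The `Process` class, frame numbers, and helper names below are hypothetical, purely to illustrate the mechanism:

```python
class Process:
    def __init__(self, page_table):
        # page name -> (frame number, copy-on-write flag)
        self.page_table = page_table

def fork_cow(parent):
    """Child initially shares every frame; both mappings are marked copy-on-write."""
    shared = {page: (frame, True) for page, (frame, _) in parent.page_table.items()}
    parent.page_table = dict(shared)
    return Process(dict(shared))

def write_page(proc, page, free_frames):
    """On the first write to a shared page, copy it into a free frame."""
    frame, cow = proc.page_table[page]
    if cow:
        proc.page_table[page] = (free_frames.pop(), False)

parent = Process({'A': (3, False), 'C': (7, False)})
child = fork_cow(parent)
write_page(child, 'C', [9])      # child gets its own copy of page C...
print(child.page_table['C'])     # ...mapped to the new frame
print(parent.page_table['C'])    # ...while the parent still maps frame 7
```

Only the written page is copied; page A stays shared by both processes, which is the efficiency win of copy-on-write process creation.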

PAGE REPLACEMENT
Why do we need for Page Replacement in OS?
If we increase the degree of multiprogramming, we may over-allocate memory. Suppose that while a
user process is executing, a page fault occurs: the OS determines where the desired page resides on
the disk, but then finds that there are no free frames on the free-frame list.
Then the operating System could:
 Terminate the user-process (Not a good idea).
 Swap out a process, freeing all its frames, and reducing the level of multiprogramming.
 Perform page replacement.

Need for page replacement


Basic Concepts of Page Replacement


If no frame is free, we find one that is not currently being used and free it.
Page replacement takes the following steps:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a) If there is a free frame, use it.
b) If there is no free frame, use a page-replacement algorithm to select a victim-frame.
c) Write the victim-frame to the disk; change the page and frame-tables accordingly.
3. Read the desired page into the newly freed frame; change the page and frame-tables.
4. Restart the user-process.

Page replacement

What are the problems that occur in page replacement concept? How it can be overcome.
Problem: If no frames are free, 2 page transfers (1 out & 1 in) are required. This situation doubles
the page-fault service-time and increases the EAT accordingly.
Solution: Use a modify-bit (or dirty bit).
Each page or frame has a modify-bit associated with the hardware. The modify-bit for a page is set
by the hardware whenever any word is written into the page (indicating that the page has been
modified).
Working: When we select a page for replacement, we examine its modify bit.
If the modify bit = 1, the page has been modified since it was read in, so we must write the page back to the disk.
If the modify bit = 0, the page has not been modified, so we need not write it to the disk; the copy
there is already up to date.

Advantage: Can reduce the time required to service a page-fault.


We must solve 2 major problems to implement demand paging:
Develop a Frame-allocation algorithm: If we have multiple processes in memory, we must decide
how many frames to allocate to each process.
Develop a Page-replacement algorithm: We must select the frames that are to be replaced.

PAGE REPLACEMENT ALGORITHMS


1) FIFO page replacement
2) Optimal page replacement
3) LRU page replacement (Least Recently Used)
To determine the number of page faults for a particular reference string and page-replacement
algorithm, we also need to know the number of page frames available.
As the number of frames available increases, the number of page faults decreases
FIFO Page Replacement
Each page is associated with the time when that page was brought into memory. When a page must
be replaced, the oldest page is chosen. We use a FIFO queue to hold all pages in memory.
When a page must be replaced, we replace the page at the head of the queue. When a page is brought
into memory, we insert it at the tail of the queue.
Example: Consider the following references string with frames initially empty;

 The first three references (7, 0, 1) cause page-faults and are brought into these empty frames.
 The next reference (2) replaces page 7, because page 7 was brought in first.
 Since 0 is the next reference and 0 is already in memory, we have no fault for this reference.
 The first reference to 3 results in replacement of page 0, since it is now first in line.
 This process continues till the end of string.
 There are fifteen page faults altogether.


Advantage: Easy to understand & program.


Disadvantages: Performance is not always good, since a replaced page may be needed again soon;
FIFO can even suffer from Belady's anomaly, where allocating more frames increases the page-fault rate.
BELADY'S ANOMALY
For example consider the following reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
For this example, the number of page faults with four frames (ten) is greater than the number of
page faults with three frames (nine). This most unexpected result is known as
Belady's anomaly.

Page-fault curve for FIFO replacement on a reference string

****What is Belady's anomaly? Explain with an example.


For some page-replacement algorithms, increasing the number of page frames does not necessarily
decrease the number of page faults; it may even increase them. This most unexpected result
is known as Belady's anomaly in operating systems.
Example: Consider the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with the number of frame
used is 3 and 4.
FIFO with 3 frames:
Frames 1 2 3 4 1 2 5 1 2 3 4 5
1 1 1 1 4 4 4 5 5 5 5 5 5
2 2 2 2 1 1 1 1 1 3 3 3
3 3 3 3 2 2 2 2 2 4 4
No. of Page faults √ √ √ √ √ √ √ √ √

No. of page faults = 9


FIFO with 4 frames:


Frames 1 2 3 4 1 2 5 1 2 3 4 5
1 1 1 1 1 1 1 5 5 5 5 4 4
2 2 2 2 2 2 2 1 1 1 1 5
3 3 3 3 3 3 3 2 2 2 2
4 4 4 4 4 4 4 3 3 3
No. of Page faults √ √ √ √ √ √ √ √ √ √

No. of page faults = 10


Conclusion: With 3 frames, No. of page faults = 9. With 4 frames, No. of page faults = 10.
Thus Belady's anomaly has occurred when the number of frames is increased from 3 to 4.
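The hand traces can be checked with a short FIFO simulation (a sketch in Python, not part of the original notes), which reproduces both the anomaly counts and the fifteen faults of the earlier FIFO example:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    resident, queue, faults = set(), deque(), 0
    for page in refs:
        if page in resident:
            continue                      # hit: FIFO order is unchanged
        faults += 1
        if len(resident) == nframes:      # no free frame: evict the oldest page
            resident.remove(queue.popleft())
        resident.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # → 9
print(fifo_faults(refs, 4))   # → 10 (Belady's anomaly)
print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))   # → 15
```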

OPTIMAL PAGE REPLACEMENT


Working principle:
Replace the page that will not be used for the longest period of time, i.e. the page whose next use
lies farthest in the future. This algorithm does not suffer from Belady's anomaly and has the lowest
page-fault rate of all algorithms.
Consider the following reference string:

Optimal page-replacement algorithm


 The first three references cause faults that fill the three empty frames.
 The reference to page 2 replaces page 7, because page 7 will not be used again until
reference 18.
 The page 0 will be used at 5, and page 1 at 14.
 With only nine page-faults, optimal replacement is much better than a FIFO algorithm,
which results in fifteen faults.
Advantage: Guarantees the lowest possible page-fault rate for a fixed number of frames.
Disadvantage: 1) Difficult to implement, because it requires future knowledge of the reference
string.
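A sketch of the algorithm in Python (future knowledge is available here only because the simulator is handed the whole reference string up front, which is exactly what a real OS cannot do):

```python
def opt_faults(refs, nframes):
    """Optimal replacement: evict the resident page whose next use is farthest away."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        def next_use(p):
            try:
                return refs.index(p, i + 1)   # index of p's next reference
            except ValueError:
                return float('inf')           # never used again: ideal victim
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # → 9
```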


The key difference between FIFO and OPT:


FIFO uses the time when a page was brought into memory. OPT uses the time when a page is to be
used.
LRU PAGE REPLACEMENT (Least Recently Used)
Working principle:
Replace the page that has not been used for the longest period of time.
Each page is associated with the time of that page's last use
Example: Consider the following reference string:

LRU page-replacement algorithm


 The first five faults are the same as those for optimal replacement.
 When the reference to page 4 occurs, LRU sees that of the three frames, page 2 was used
least recently. Thus, the LRU replaces page 2.
 The LRU algorithm produces twelve page faults.
Two methods of implementing LRU:
1. Counters
 Each page-table entry is associated with a time-of-use field.
 A counter(or logical clock) is added to the CPU.
 The clock is incremented for every memory-reference.
 Whenever a reference to a page is made, the contents of the clock register are copied to the
time-of-use field in the page-table entry for that page.
 We replace the page with the smallest time value.
2. Stack
 Keep a stack of page-numbers as shown in below figure.
 Whenever a page is referenced, the page is removed from the stack and put on the top.
 The most recently used page is always at the top of the stack. The least recently used page
is always at the bottom.
 A stack is best implemented as a doubly linked list.
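The stack idea maps directly onto a short simulation (a sketch; a plain list ordered from LRU at the front to MRU at the back stands in for the doubly linked list):

```python
def lru_faults(refs, nframes):
    """LRU replacement using the stack idea: LRU page at the front, MRU at the back."""
    stack, faults = [], 0
    for page in refs:
        if page in stack:
            stack.remove(page)          # hit: pull the page out of the stack...
        else:
            faults += 1
            if len(stack) == nframes:
                stack.pop(0)            # evict the least recently used page
        stack.append(page)              # ...and put it on top (most recently used)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # → 12
```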


Advantage:
1) Does not suffer from Belady's anomaly.
Disadvantage:
1) Few computer systems provide sufficient hardware support for true LRU page replacement.
Both LRU & OPT are called stack algorithms.

Use of a stack to record the most recent page references


LRU-Approximation Page Replacement
Some systems provide a reference bit for each page. Initially, all bits are cleared (to 0) by the OS.
As a user-process executes, the bit associated with each page referenced is set (to 1) by the
hardware. By examining the reference bits, we can determine which pages have been used and
which have not been used. This information is the basis for many page-replacement algorithms that
approximate LRU replacement.
Additional-Reference-Bits Algorithm
We can gain additional ordering information by recording the reference bits at regular intervals.
An 8-bit byte is kept for each page in a table in memory. At regular intervals, a timer interrupt
transfers control to the OS, which shifts the reference bit for each page into the high-order bit of
its 8-bit byte, shifting the other bits right by one. These 8-bit shift registers contain the history of
page use for the last eight time periods.
Examples:
00000000 - this page has not been used in the last 8 time units (800 ms).
11111111 - this page has been used in every time unit of the past 8 time units.
A page with history 11000100 has been used more recently than one with 01110111. Interpreting
the bytes as unsigned numbers, the page with the lowest number is the LRU page, and it can be
replaced. If numbers are equal, FCFS is used.
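The periodic shift can be sketched as follows, reusing the bit patterns from the examples above (the page names 'A' and 'B' are invented for illustration):

```python
def age(history, ref_bits):
    """Timer interrupt: shift each page's reference bit into the high-order
    bit of its 8-bit history byte, then clear the hardware bit."""
    for page in history:
        history[page] = ((history[page] >> 1) | (ref_bits[page] << 7)) & 0xFF
        ref_bits[page] = 0

history = {'A': 0b11000100, 'B': 0b01110111}
ref_bits = {'A': 0, 'B': 1}   # only B was referenced in the last interval
age(history, ref_bits)
print(f"{history['A']:08b}")            # → 01100010
print(f"{history['B']:08b}")            # → 10111011
print(min(history, key=history.get))    # → A (lowest value = LRU approximation)
```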


SECOND-CHANCE ALGORITHM
The number of bits of history included in the shift register can be varied to make the updating as
fast as possible. In the extreme case, the number can be reduced to zero, leaving only the reference
bit itself. This algorithm is called the second-chance algorithm. This is the variant of basic FIFO
replacement algorithm.
Procedure:
 Initially all reference bits are set to 0 and any page hit results in corresponding reference bit
to set to 1
 When a page has been selected for replacement, we inspect its reference bit
 If reference bit = 0, we proceed to replace this page.
 If reference bit = 1, we give the page a second chance & move on to select next FIFO page.
 When a page gets a second chance, its reference bit is cleared, and its arrival time is reset.

A circular queue can be used to implement the second-chance algorithm as shown in above figure.


 A pointer (that is, a hand on the clock) indicates which page is to be replaced next.
 When a frame is needed, the pointer advances until it finds a page with a 0 reference bit.
 As it advances, it clears the reference bits.
 Once a victim page is found, the page is replaced, and the new page is inserted in the circular
queue in that position.
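The clock procedure above can be sketched in a few lines (one convention is assumed here: a newly loaded page has its reference bit set, since loading it counts as a reference):

```python
def second_chance_faults(refs, nframes):
    """Clock (second-chance) replacement; returns the page-fault count."""
    frames = [None] * nframes      # circular buffer of resident pages
    ref_bit = [0] * nframes
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: grant a second chance
            continue
        faults += 1
        while ref_bit[hand]:                  # hand sweeps past pages whose bit
            ref_bit[hand] = 0                 # is set, clearing bits as it goes
            hand = (hand + 1) % nframes
        frames[hand] = page                   # 0-bit victim found: replace it
        ref_bit[hand] = 1                     # assumed: loading sets the bit
        hand = (hand + 1) % nframes
    return faults

print(second_chance_faults([1, 2, 3, 1, 4, 5], 3))   # → 5
```

In the worst case, every bit is set and the hand cycles through the whole queue, which degenerates into plain FIFO.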
Enhanced Second-Chance Algorithm
We can enhance the second-chance algorithm by considering Reference bit and modify-bit.
We have following 4 possible classes:
1) (0, 0) neither recently used nor modified - best page to replace.
2) (0, 1) not recently used but modified - not quite as good, because the page will need to be
written out before replacement.
3) (1, 0) recently used but clean - probably will be used again soon.
4) (1, 1) recently used and modified - probably will be used again soon, and the page will
need to be written out to disk before it can be replaced.
Each page is in one of these four classes. When page replacement is called for, we examine the class
to which that page belongs. We replace the first page encountered in the lowest nonempty class.
Counting-Based Page Replacement
1. LFU (Least Frequently Used) page-replacement algorithm
Working principle: The page with the smallest count will be replaced. The reason for this selection
is that an actively used page should have a large reference count.
Problem:
A page may be used heavily during the initial phase of a process but then never used again. Since it
was used heavily, it has a large count and remains in memory even though it is no longer needed.
Solution:
Shift the counts right by 1 bit at regular intervals, forming an exponentially decaying average usage
count.
2. MFU (Most Frequently Used) page-replacement algorithm
Working principle: Replace the page with the largest count, on the argument that the page with the
smallest count was probably just brought in and has yet to be used.


EXERCISE PROBLEMS

1) Consider the page reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1, for a memory with


3 frames, how many page faults would occur for i) LRU algorithm ii) FIFO algorithm and
iii) Optimal page replacement algorithm? Which is the most efficient among them?
Solution:
LRU with 3 frames:
Frames 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
1 7 7 7 2 2 2 2 4 4 4 0 0 0 1 1 1 1 1 1 1
2 0 0 0 0 0 0 0 0 3 3 3 3 3 3 0 0 0 0 0
3 1 1 1 3 3 3 2 2 2 2 2 2 2 2 2 7 7 7
No. of Page faults √ √ √ √ √ √ √ √ √ √ √ √

No of page faults = 12
FIFO with 3 frames:
Frames 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
1 7 7 7 2 2 2 2 4 4 4 0 0 0 0 0 0 0 7 7 7
2 0 0 0 0 3 3 3 2 2 2 2 2 1 1 1 1 1 0 0
3 1 1 1 1 0 0 0 3 3 3 3 3 2 2 2 2 2 1
No. of Page faults √ √ √ √ √ √ √ √ √ √ √ √ √ √ √

No of page faults = 15
Optimal with 3 frames:
Frames 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
1 7 7 7 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 7
2 0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0
3 1 1 1 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1
No. of Page faults √ √ √ √ √ √ √ √ √

No of page faults = 9
Conclusion: The optimal page replacement algorithm is most efficient among three algorithms, as
it has lowest page faults i.e. 9.


2) Consider the page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6


How many page fault, would occur for the following page replacement algorithms assuming
3 and 5 frames. i) LRU ii) Optimal
Solution:

LRU with 3 frames:


Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 4 4 4 5 5 5 1 1 1 7 7 7 2 2 2 2 2
2 2 2 2 2 2 2 6 6 6 6 3 3 3 3 3 3 3 3 3
3 3 3 3 1 1 1 2 2 2 2 2 6 6 6 1 1 1 6
No. of Page faults √ √ √ √ √ √ √ √ √ √ √ √ √ √ √

No of page faults = 15
LRU with 5 frames:
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
3 3 3 3 3 3 6 6 6 6 6 6 6 6 6 6 6 6 6
4 4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3 3
5 5 5 5 5 5 5 7 7 7 7 7 7 7 7
No. of Page faults √ √ √ √ √ √ √ √

No of page faults = 8
Optimal with 3 frames:
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 6
2 2 2 2 2 2 2 2 2 2 2 2 7 7 7 2 2 2 2 2
3 3 4 4 4 5 6 6 6 6 6 6 6 6 6 1 1 1 1
No. of Page faults √ √ √ √ √ √ √ √ √ √ √

No of page faults = 11


Optimal with 5 frames:


Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
4 4 4 4 4 6 6 6 6 6 6 6 6 6 6 6 6 6
5 5 5 5 5 5 5 7 7 7 7 7 7 7 7
No. of Page faults √ √ √ √ √ √ √

No of page faults = 7

3) For the page reference, 5, 4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5. calculate the page faults that occur
using FIFO and LRU for 3 and 4 page frames respectively.
Solution:
LRU with 3 frames:
Frames 5 4 3 2 1 4 3 5 4 3 2 1 5
1 5 5 5 2 2 2 3 3 3 3 3 3 5
2 4 4 4 1 1 1 5 5 5 2 2 2
3 3 3 3 4 4 4 4 4 4 1 1
No. of Page faults √ √ √ √ √ √ √ √ √ √ √

No of page faults = 11

LRU with 4 frames:


Frames 5 4 3 2 1 4 3 5 4 3 2 1 5
1 5 5 5 5 1 1 1 1 1 1 2 2 2
2 4 4 4 4 4 4 4 4 4 4 4 5
3 3 3 3 3 3 3 3 3 3 3 3
4 2 2 2 2 5 5 5 5 1 1
No. of Page faults √ √ √ √ √ √ √ √ √

No of page faults = 9


FIFO with 3 frames:


Frames 5 4 3 2 1 4 3 5 4 3 2 1 5
1 5 5 5 2 2 2 3 3 3 3 3 1 1
2 4 4 4 1 1 1 5 5 5 5 5 5
3 3 3 3 4 4 4 4 4 2 2 2
No. of Page faults √ √ √ √ √ √ √ √ √ √

No of page faults = 10

FIFO with 4 frames:


Frames 5 4 3 2 1 4 3 5 4 3 2 1 5
1 5 5 5 5 1 1 1 1 1 1 2 2 2
2 4 4 4 4 4 4 5 5 5 5 1 1
3 3 3 3 3 3 3 4 4 4 4 5
4 2 2 2 2 2 2 3 3 3 3
No. of Page faults √ √ √ √ √ √ √ √ √ √ √

No of page faults = 11

4) Consider the reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. Calculate the page faults using


FIFO and LRU for memory with 3 and 4 frames.
Solution:
FIFO with 3 frames:
Frames 1 2 3 4 1 2 5 1 2 3 4 5
1 1 1 1 4 4 4 5 5 5 5 5 5
2 2 2 2 1 1 1 1 1 3 3 3
3 3 3 3 2 2 2 2 2 4 4
No. of Page faults √ √ √ √ √ √ √ √ √

No. of page faults = 9


FIFO with 4 frames:


Frames 1 2 3 4 1 2 5 1 2 3 4 5
1 1 1 1 1 1 1 5 5 5 5 4 4
2 2 2 2 2 2 2 1 1 1 1 5
3 3 3 3 3 3 3 2 2 2 2
4 4 4 4 4 4 4 3 3 3
No. of Page faults √ √ √ √ √ √ √ √ √ √

No. of page faults = 10


LRU with 3 frames:
Frames 1 2 3 4 1 2 5 1 2 3 4 5
1 1 1 1 4 4 4 5 5 5 3 3 3
2 2 2 2 1 1 1 1 1 1 4 4
3 3 3 3 2 2 2 2 2 2 5
No. of Page faults √ √ √ √ √ √ √ √ √

No. of page faults = 10


LRU with 4 frames:
Frames 1 2 3 4 1 2 5 1 2 3 4 5
1 1 1 1 1 1 1 1 1 1 1 1 5
2 2 2 2 2 2 2 2 2 2 2 2
3 3 3 3 3 5 5 5 5 4 4
4 4 4 4 4 4 4 3 3 3
No. of Page faults √ √ √ √ √ √ √ √

No. of page faults = 8


ALLOCATION OF FRAMES
We must also allocate at least a minimum number of frames to processes. One reason for this is
performance. As the number of frames allocated to each process decreases, the page-fault rate
increases, slowing process execution. In addition, when a page-fault occurs before an executing
instruction is complete, the instruction must be restarted. The minimum number of frames is defined
by the computer architecture.
Explain any one frame allocation algorithms with example
ALLOCATION ALGORITHMS
1. Equal Allocation
2. Proportional Allocation
Equal Allocation:
We split m frames among n processes by giving each an equal share, m/n frames.
For example: if there are 93 frames and five processes, each process will get 18 frames. The three
leftover frames can be used as a free-frame buffer pool.
Proportional Allocation
We allocate available memory to each process according to its size: if the size of process pi is si,
S = Σ si, and the total number of frames is m, then pi is allocated approximately ai = (si / S) × m frames.
In both schemes, the allocation may vary with the multiprogramming level. If the
multiprogramming level is increased, each process will lose some frames to provide the memory
needed for the new process. Conversely, if the multiprogramming level decreases, the frames that
were allocated to the departed process can be spread over the remaining processes.
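A minimal sketch of proportional allocation (the two process sizes of 10 and 127 pages sharing 62 frames are assumed example values):

```python
def proportional(sizes, m):
    """Allocate a_i = floor((s_i / S) * m) frames to each process, S = sum of sizes."""
    S = sum(sizes)
    return [s * m // S for s in sizes]

# Two processes of 10 and 127 pages sharing 62 free frames:
print(proportional([10, 127], 62))   # → [4, 57]
```

Because of the flooring, a frame may be left over (here 4 + 57 = 61 of 62); leftovers can go to a free-frame buffer pool, as with equal allocation.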
GLOBAL VERSUS LOCAL ALLOCATION
Global Replacement: allows a process to select a replacement frame from the set of all frames; a
process may take frames allocated to other processes, thus increasing the number of frames
allocated to it.
Disadvantage: a process cannot control its own page-fault rate.
Advantage: results in greater system throughput.
Local Replacement: each process selects only from its own set of allocated frames, so the number
of frames allocated to a process does not change.
Disadvantage: might prevent a process from using other, less-used pages of memory.


THRASHING
What is thrashing? Explain thrashing concept in operating system
As we increase the degree of multiprogramming, CPU utilization increases up to a certain level;
beyond that level it decreases drastically. This phenomenon is called thrashing.
A process is thrashing if it is spending more time paging than executing.
Cause of Thrashing
• Thrashing results in severe performance problems
• The OS monitors CPU utilization. If utilization is low, the OS increases the degree of
multiprogramming by introducing a new process to the system.
• If global replacement algorithm is used it replaces pages without regard to the process to
which they belong
• As the processes wait for the paging device, CPU utilization decreases
The thrashing phenomenon:
As processes keep faulting, they queue up for the paging device, so CPU utilization decreases
The CPU scheduler sees the decreasing CPU utilization and increases the degree of
multiprogramming as a result. The new process causes even more page-faults and a longer queue!

Thrashing

How can thrashing be limited in an operating system?

1. Use a local replacement algorithm: if one process starts thrashing, it cannot steal frames from
another process and cause the latter to thrash as well.
2. Provide each process with as many frames as it needs. This approach is based on the
locality model of process execution, which states that as a process executes, it moves
from locality to locality. A locality is a set of pages that are actively used together.


A program may consist of several different localities, which may overlap.


Determination of number of frames allocated to a process
To know how many frames a process needs, several techniques exist. The most popular, the
working-set strategy, starts by looking at how many pages the process is actually using. This
approach uses the locality model of process execution.
Working set model
The model uses a parameter Δ to define the working-set window. The set of pages in the most recent
Δ page references is the working set. If a page is in active use, it will be in the working set;
if it is no longer being used, it will drop from the working set Δ time units after its last reference.
For example, with Δ = 10, we might have WS(t1) = {1, 2, 5, 6, 7} at time t1 and WS(t2) = {3, 4} at a later time t2.

Δ = working-set window = a fixed number of page references (for example, 10,000 instructions).
WSSi (working-set size of process Pi) = total number of pages referenced in the most recent Δ (varies in time).
If Δ is too small, it will not encompass an entire locality. If Δ is too large, it will encompass several localities.
If Δ = ∞, it will encompass the entire program.
D = Σ WSSi = total demand for frames.
If D > m, thrashing occurs (the page demand exceeds the total number of available frames).
Policy: if D > m, then suspend one of the processes.
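The working set over a sliding window can be computed directly (a sketch; the reference string below is invented so that the two windows reproduce the WS(t1) and WS(t2) sets of the example above):

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent delta references ending at time t."""
    return set(refs[max(0, t - delta + 1):t + 1])

refs = [1, 2, 1, 5, 6, 2, 1, 2, 1, 7,                 # locality {1, 2, 5, 6, 7}
        3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 3, 4, 3, 4]     # locality {3, 4}
print(sorted(working_set(refs, 9, 10)))    # → [1, 2, 5, 6, 7]
print(sorted(working_set(refs, 23, 10)))   # → [3, 4]
```

Summing the working-set sizes over all processes gives D, which the OS compares against the total frame count m to decide whether a process must be suspended.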


Page-Fault Frequency


Thrashing has a high page-fault rate, so we want to control that rate. When the rate is too high, the
process needs more frames; when it is too low, the process may have too many frames.
We can therefore establish upper and lower bounds on the page-fault rate: if the actual page-fault rate
exceeds the upper limit, we allocate the process another frame, and if the page-fault rate falls below
the lower limit, we remove a frame from the process. In this way the page-fault rate is controlled
and thrashing is prevented.
Page-Fault Frequency Scheme
Establish an "acceptable" page-fault rate: if the actual rate is too low, the process loses a frame; if it
is too high, the process gains a frame.
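The control loop can be sketched as follows (the 0.02 and 0.10 bounds are assumed example values, not figures from these notes):

```python
def adjust_allocation(fault_rate, frames, lower=0.02, upper=0.10):
    """Page-fault-frequency control: gain a frame above the upper bound,
    lose one below the lower bound, otherwise leave the allocation alone."""
    if fault_rate > upper:
        return frames + 1
    if fault_rate < lower:
        return max(1, frames - 1)
    return frames

print(adjust_allocation(0.15, 8))   # → 9 (too many faults: allocate another frame)
print(adjust_allocation(0.01, 8))   # → 7 (very few faults: reclaim a frame)
```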
