OS Module 4 CEC
MEMORY MANAGEMENT
8.1 BACKGROUND
Main Memory
Basic Hardware structure of Main memory: Program must be brought (from disk) into main
memory and placed within a process for it to be run. Main-memory and registers are the only storage
that a CPU can access directly.
Register access in one CPU clock.
Main-memory can take many cycles.
Cache sits between main-memory and CPU registers.
Protection of memory required to ensure correct operation.
A pair of base- and limit-registers is used to define the logical (virtual) address space, as shown in the figures below.
[Figure: A base and a limit-register define a logical-address space]
[Figure: Hardware address protection with base and limit-registers]
ADDRESS BINDING
Explain the multi-step processing of a user program with a neat block diagram.
Address binding of instructions and data to memory addresses can happen at three different stages:
Compile Time: If the memory-location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load Time: Must generate re-locatable code if the memory-location is not known at compile time.
Execution Time: Binding is delayed until run-time if the process can be moved during its execution from one memory-segment to another. This needs hardware support for address maps (e.g. base and limit-registers).
Logical address space: the set of all logical addresses generated by the CPU for a program.
Physical address space: the set of all physical addresses mapped to the corresponding logical addresses.
8.2 SWAPPING
A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. The major part of swap-time is transfer-time; i.e. the total transfer-time is directly proportional to the amount of memory swapped.
Disadvantages: Context-switch time is fairly high. If we want to swap a process, we must be sure that it is completely idle. Two solutions: never swap a process with pending I/O operations, or execute I/O operations only into OS buffers.
8.3 CONTIGUOUS MEMORY ALLOCATION
Memory is usually divided into 2 partitions:
1. One for the resident OS.
2. One for the user-processes.
Each process is contained in a single contiguous section of memory.
Memory Mapping & Protection
Memory-protection means protecting OS from user-process and protecting user-processes from one
another. Memory-protection is done using
Relocation-register: contains the value of the smallest physical-address.
Limit-register: contains the range of logical-addresses.
Each logical-address must be less than the limit-register. The MMU maps the logical-address
dynamically by adding the value in the relocation-register. This mapped-address is sent to memory
as shown in below figure. When the CPU scheduler selects a process for execution, the dispatcher
loads the relocation and limit-registers with the correct values. Because every address generated
by the CPU is checked against these registers, we can protect the OS from the running-process. The
relocation-register scheme provides an effective way to allow the OS size to change dynamically.
Transient OS code: Code that comes & goes as needed to save memory-space and overhead for
unnecessary swapping.
MEMORY ALLOCATION
Two types of memory partitioning are:
1) Fixed-sized partitioning and
2) Variable-sized partitioning
Fixed-sized Partitioning (Static partition) The memory is divided into fixed-sized partitions
before the process enters main memory. The size of each partition may or may not be same. Each
partition may contain exactly one process. The degree of multiprogramming is bound by the number
of partitions (Limitation on number of processes). When a partition is free, a process is selected
from the input queue and loaded into the free partition. When the process terminates, the partition
becomes available for another process.
Variable-sized Partitioning ( Dynamic partition): Partitions are made as the process enters into
main memory. The OS keeps a table indicating which parts of memory are available and which
parts are occupied. A hole is a block of available memory. Normally, memory contains a set of
holes of various sizes. Initially, all memory is available for user-processes and considered one large
hole. When a process arrives, the process is allocated memory from a large hole. If we find the hole,
we allocate only as much memory as is needed and keep the remaining memory available to satisfy
future requests.
Three strategies used to select a free hole from the set of available holes.
1. First Fit
2. Best Fit
3. Worst Fit
Comparison of First fit, Best fit and Worst fit algorithms
First Fit
Allocate the first hole that is big enough to satisfy the request.
Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended.
Best Fit
Allocate the smallest hole that is big enough to satisfy the request.
We must search the entire list, unless the list is ordered by size.
This strategy produces the smallest leftover hole, so less space is wasted.
Worst Fit
Allocate the largest hole that is big enough to satisfy the request.
Again, we must search the entire list, unless it is sorted by size.
This strategy produces the largest leftover hole, so more space is wasted.
Simulations show that first fit and best fit are better than worst fit in terms of both speed and storage utilization.
FRAGMENTATION
What is fragmentation?
As processes are loaded into and removed from main memory, the free memory space is broken into little pieces. Eventually, processes cannot be allocated memory because the free pieces are too small, and these memory blocks or partitions remain unused. This problem is called fragmentation.
Two types of memory fragmentation:
1) Internal fragmentation and
2) External fragmentation
Internal Fragmentation
The general approach is to break the physical-memory into fixed-sized blocks and allocate memory
in units based on block size or process size as shown below:
The allocated(partitioned) memory to a process may be slightly larger than the requested-memory.
The difference between requested-memory and allocated-memory is called internal fragmentation
i.e. Unused memory that is internal to a partition.
External Fragmentation
External fragmentation occurs when there is enough total memory-space to satisfy a request, but the available spaces are not contiguous (i.e. storage is fragmented into a large number of small holes). For example, in the figure below there are 3 free holes of sizes 9K, 5K and 6K. Consider the next arriving process, whose size = 20K. No memory partition can be allocated to this process, even though the total available memory is no smaller than the size of the process, because the available spaces are not contiguous.
Both the first-fit and best-fit strategies for memory-allocation suffer from external fragmentation.
Statistical analysis of first-fit reveals that given N allocated blocks, another 0.5 N blocks will be
lost to fragmentation. This property is known as the 50-percent rule.
Given the memory partitions of 200K, 700K, 500K, 300K, 100K, 400K. Apply first fit and best
fit to place 315K, 427K, 250K, 550K processes.
Solution: First Fit
Best Fit:
Worst Fit
2. Given the 5 memory partitions 100KB, 500KB, 200KB, 300KB and 600KB, show how each of the first-fit, best-fit and worst-fit algorithms places processes of size 212KB, 417KB, 112KB and 426KB. Which algorithm makes the most efficient use of memory?
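Both exercises can be checked with a short simulation. This is a minimal sketch, assuming dynamic partitioning in which a selected hole is split and the leftover remains available (the exercises do not state this convention explicitly); `allocate` and its parameter names are illustrative, not from the text.

```python
def allocate(holes, requests, strategy):
    """Place each request using first/best/worst fit; a chosen hole is
    split and the leftover stays available (dynamic partitioning assumed)."""
    holes = list(holes)
    placements = []
    for req in requests:
        idxs = [i for i, h in enumerate(holes) if h >= req]
        if not idxs:
            placements.append(None)               # request must wait
            continue
        if strategy == "first":
            i = idxs[0]                           # first adequate hole
        elif strategy == "best":
            i = min(idxs, key=lambda j: holes[j]) # smallest adequate hole
        else:                                     # "worst"
            i = max(idxs, key=lambda j: holes[j]) # largest hole
        placements.append(holes[i])               # record chosen hole's size
        holes[i] -= req                           # split: leftover remains a hole
    return placements

for parts, procs in [([200, 700, 500, 300, 100, 400], [315, 427, 250, 550]),
                     ([100, 500, 200, 300, 600], [212, 417, 112, 426])]:
    for s in ("first", "best", "worst"):
        print(s, allocate(parts, procs, s))
```

Under this splitting assumption, best fit is the only strategy that places every process in both exercises (first fit and worst fit each leave the last process waiting), which answers the efficiency question.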
8.4 PAGING
What is paging?
Paging is a memory management scheme that permits the physical address space of a process to be
noncontiguous.
It avoids external fragmentation. It also solves the considerable problem of fitting memory chunks
of varying sizes onto the backing store.
Paging Hardware
Explain the concept of simple paging hardware.
Divide physical (main) memory into fixed-sized blocks called frames, and break logical memory into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available main memory-
frames from the backing-store. The backing-store is divided into fixed-sized blocks that are
of the same size as the memory-frames.
• Still have Internal fragmentation
The page-table contains the base-address (frame number on which page is loaded) of each page in
physical-memory. Address generated by CPU is the logical address divided into 2 parts:
1. Page-number(p) is used as an index to the page-table and
2. Offset(d) is combined with the base-address (frame number) to define the physical-address.
This physical-address is sent to the memory-unit to fetch the required data/instruction.
With supporting paging hardware, explain in detail concept of paging with an example for
32-byte memory with 4 byte pages with a process being 16 bytes. How many bits are reserved
for page number and page offset in the logical address. Suppose the logical address is 5,
calculate the corresponding physical address, after populating memory and page table.
For explanation refer above paging hardware topic
Example:
Given that page size = 4 byte. Therefore frame size = 4 byte.
Process size = 16 byte
Number of pages = 16/4 = 4 Pages
Main memory size = 32 byte
Number of frames in main memory = 32/4 =8
When CPU generates logical address it contains two parts:
Page Number and Offset field.
Number of pages = 4, so we need 2 bits to represent the 4 page numbers 0, 1, 2, 3.
The number of bits required for the offset field must address every byte in a page; page size = 4 bytes, so the offset needs 2 bits.
So the number of bits reserved for the page number = 2 bits and the number of bits for the offset = 2 bits. Therefore the CPU generates logical addresses of size 4 bits.
Suppose CPU generates a logical address = 5
Logical address 5 = 0101 in binary: page number (2 bits) = 01, offset (2 bits) = 01.
The physical address corresponding to this logical address (5) is obtained by referring the following
memory and page table information.
In the above logical address, page number = 01 = page 1.
Now the MMU refers to the page table and finds the frame number corresponding to page 1, i.e. frame 6.
Since main memory contains 8 frames, we need 3 bits to represent 8 different frames, f0, f1….f7,
in physical address. Offset field = 2 bits
Physical address: frame number (3 bits) = 110, offset (2 bits) = 01.
That is the logical address 5 (page 1, offset 1) maps to physical address = (6 x 4) +1 = 25
Therefore the physical address corresponding to the logical address 5 = 11001 = 25
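The translation above can be sketched in a few lines of code. Only the page-1 → frame-6 mapping comes from the example; the `translate` helper is an illustrative name.

```python
PAGE_SIZE = 4                      # 4-byte pages → 2-bit offset

page_table = {1: 6}                # from the example: page 1 resides in frame 6
                                   # (other entries omitted; only page 1 is needed)

def translate(logical):
    p = logical // PAGE_SIZE       # page number = high-order bits (01)
    d = logical % PAGE_SIZE        # offset = low-order bits (01)
    f = page_table[p]              # one memory access to the page table
    return f * PAGE_SIZE + d       # frame base + offset

print(translate(5))                # (6 x 4) + 1 = 25
```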
1. The problem with simple paging is that an extra memory reference is required to access the page table to get the frame number corresponding to a page number.
2. Thus two memory accesses are needed to access a byte in main memory: one for the page-table entry and one for the byte itself.
3. In the simple paging scheme, memory access is therefore slowed by a factor of 2. The standard solution is a small, fast-lookup hardware cache called the translation look-aside buffer (TLB).
TRANSLATION LOOK-ASIDE BUFFER (TLB)
Each TLB entry consists of two parts: a) a page number and b) a frame number. The TLB contains only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB.
If the page-number is found(TLB hit), its frame-number is immediately available and used
to access memory.
If page-number is not found in TLB (TLB miss), a memory-reference to page table must
be made. The obtained frame-number can be used to access memory. In addition, we add
this page-number and frame-number to the TLB, so that they will be found quickly on the
next reference.
If the TLB is already full of entries, the OS must select one for replacement. Percentage of times
that a particular page-number is found in the TLB is called hit ratio.
Advantage: Search operation is fast.
Disadvantage: Hardware is expensive.
In a paging scheme with a TLB, it takes 20ns to search the TLB and 100ns to access memory. Find the effective access time and the percentage slowdown in memory access time if
i. Hit ratio is 80%
ii. Hit ratio is 98%
Solution:
In the paging scheme with TLB, if we find the page in TLB, it takes 20ns to search the TLB and
100ns to access memory, and then a mapped memory access takes 120ns when the page number is
in the TLB.
If we fail to find the page number in the TLB (20ns), we must first access memory for the page-table entry to get the corresponding frame number (100ns), and then access the desired byte in memory (100ns).
Thus a total of 220ns is required to access the byte when the page number is not in the TLB (TLB search + page-table access in main memory + desired-byte access in main memory).
i) Effective memory access time = 0.80 x 120 + 0.20 x 220 = 140ns
Percentage slowdown in memory access time = (140 - 100)/100 = 40%
ii) Effective memory access time = 0.98 x 120 + 0.02 x 220 = 122ns
Percentage slowdown in memory access time = (122 - 100)/100 = 22%
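The effective-access-time calculation generalizes to any hit ratio. A small sketch (the function and parameter names are illustrative):

```python
def effective_access_time(hit_ratio, tlb_time=20, mem_time=100):
    hit = tlb_time + mem_time          # TLB hit: TLB search + one memory access
    miss = tlb_time + 2 * mem_time     # miss: search + page-table access + byte access
    return hit_ratio * hit + (1 - hit_ratio) * miss

for h in (0.80, 0.98):
    eat = effective_access_time(h)
    print(f"hit ratio {h:.0%}: EAT = {eat:.0f}ns, "
          f"slowdown = {(eat - 100) / 100:.0%}")
```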
MEMORY PROTECTION
Memory-protection is achieved by protection-bits for each frame. The protection-bits are kept in
the page-table. One protection-bit can define a page to be read-write or read-only. Every reference
to memory goes through the page-table to find the correct frame-number. Firstly, the physical-
address is computed. At the same time, the protection-bit is checked to verify that no writes are
being made to a read-only page. An attempt to write to a read-only page causes a hardware-trap to
the OS (or memory-protection violation).
HIERARCHICAL (TWO-LEVEL) PAGING
The page-table itself is paged. This is also known as a forward-mapped page-table because address translation works from the outer page-table inwards.
Consider a system with a 32-bit logical-address space and a page-size of 4 KB. A logical-address is divided into a 20-bit page-number and a 12-bit page-offset. Since the page-table is paged, the page-number is further divided into a 10-bit outer page-number (p1) and a 10-bit inner page-number (p2). Thus, a logical-address is as follows:
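The 10/10/12 split of a 32-bit address can be sketched with bit operations; the function name and the sample address are illustrative:

```python
def split_two_level(addr):
    """Split a 32-bit logical address into p1 (10 bits), p2 (10 bits), d (12 bits)."""
    d = addr & 0xFFF               # low 12 bits: offset within the 4 KB page
    p2 = (addr >> 12) & 0x3FF      # next 10 bits: index into an inner page table
    p1 = (addr >> 22) & 0x3FF      # top 10 bits: index into the outer page table
    return p1, p2, d

print(split_two_level(0x12345678))   # sample address, chosen arbitrarily
```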
HASHED PAGE TABLES
The virtual page-number is hashed into the hash-table. The virtual page-number is compared with
the first element in the linked-list. If there is a match, the corresponding page-frame (field 2) is used
to form the desired physical-address. If there is no match, subsequent entries in the linked-list are
searched for a matching virtual page-number.
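The hash-and-chain lookup described above can be sketched as follows; the table size and the inserted entries are hypothetical:

```python
from collections import defaultdict

TABLE_SIZE = 16                    # number of hash buckets (hypothetical)
hash_table = defaultdict(list)     # bucket → chain of (virtual page, frame) pairs

def insert(vpn, frame):
    hash_table[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    for v, f in hash_table[vpn % TABLE_SIZE]:   # walk the bucket's linked list
        if v == vpn:
            return f               # match: this frame forms the physical address
    return None                    # no match in the chain → page fault

insert(0x1A2B, 7)                  # hypothetical entries
insert(0x2C3D, 3)
print(lookup(0x1A2B), lookup(0x9999))
```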
INVERTED PAGE TABLES
It has one entry for each real page of memory. Each entry consists of virtual-address of the page
stored in that real memory-location and information about the process that owns the page.
Each virtual-address consists of a triplet:
<process-id, page-number, offset>.
Each inverted page-table entry is a pair <process-id, page-number>
The algorithm works as follows: when a memory reference <process-id, page-number, offset> occurs, the inverted page-table is searched for an entry matching <process-id, page-number>. If a match is found at entry i, the physical address <i, offset> is generated. If no match is found, an illegal address access has been attempted.
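A minimal sketch of the search; the table contents, process names, and page size are hypothetical:

```python
PAGE_SIZE = 4096

# one entry per physical frame: (process-id, page-number), or None if free
inverted_table = [("P1", 0), ("P2", 4), ("P1", 3), None]

def translate(pid, page, offset):
    for frame, entry in enumerate(inverted_table):   # search the whole table
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset        # match at entry i → <i, d>
    raise MemoryError("illegal address access")      # no match anywhere

print(translate("P1", 3, 100))    # entry 2 matches → 2 x 4096 + 100 = 8292
```

The linear search over every frame is the scheme's main cost, which is why real systems pair an inverted table with a hash table or TLB.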
8.6 SEGMENTATION
Explain segmentation with an example.
Segmentation is a memory-management scheme that supports the user view of memory. A logical-address space is a collection of segments of various sizes; each segment has a name and a length. The addresses specify both the segment-name and the offset within the segment.
Normally, the user-program is compiled, and the compiler automatically constructs segments reflecting the input program.
For example: the code, global variables, the heap (from which memory is allocated), and the stacks used by each thread.
Hardware Support
The segment-table maps two-dimensional user-defined addresses into one-dimensional physical addresses. Each entry in the segment-table has the following 2 fields:
Segment-base: contains the starting physical-address where the segment resides in memory.
Segment-limit: specifies the length of the segment.
Segmentation hardware
A logical-address consists of 2 parts:
Segment-number (s): used as an index into the segment-table.
Offset (d): must be between 0 and the segment-limit.
If the offset is not between 0 and the segment-limit, we trap to the OS (logical-addressing attempt beyond end of segment).
If the offset is legal, it is added to the segment-base to produce the physical-memory address.
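The limit check and base addition can be sketched as follows; the segment-table values here are hypothetical (they are not the table used in the solved problems):

```python
# hypothetical segment table: segment number → (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate(s, d):
    base, limit = segment_table[s]
    if not 0 <= d < limit:               # offset must lie within the segment
        raise MemoryError("trap: addressing beyond end of segment")
    return base + d                      # legal offset: add to the segment base

print(translate(1, 53))                  # 6300 + 53 = 6353
```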
Given a segment table, what are the physical addresses for the following logical addresses:
i. (0, 430) ii. (1, 10) iii. (2, 500) iv. (3, 400)
Solution
i. (0, 430)
Segment = 0 and offset = 430
Therefore physical address = 219 + 430 = 649
ii. (1, 10)
Segment = 1 and offset = 10
Therefore physical address = 2300 + 10 = 2310
iii. (2, 500)
Segment = 2 and offset = 500
Therefore physical address = 1327 + 500 = 1827
iv. (3, 400)
Segment = 3 and offset = 400, which is greater than the segment limit of 96. A reference to byte 400 of segment 3 therefore results in a trap to the OS, as this segment is only 96 bytes long, generating a segment-fault error.
Consider the following segment table:
What are the physical addresses for the following logical addresses:
i. (0, 9, 9) ii. (2, 78) iii. (1, 265) iv) (3, 222) v) (0, 111)
Solution
i. (0, 9, 9): This logical address is invalid, since a segmented logical address contains only two parts: a segment number and an offset.
ii. (2, 78)
Segment = 2 and offset = 78
Therefore physical address = 111 + 78 = 189
Dept. of CSE, CEC
DEMAND PAGING
******** What is demand paging? Explain demand paging in detail.
The process of loading the page into main memory on demand (whenever page fault occurs) is
known as demand paging.
A demand-paging system is similar to a paging system with swapping, where processes reside in secondary memory (usually a disk).
It is a method of virtual-memory management in which a process begins execution with none of its pages in main memory; many page faults occur until most of the process's working set of pages is in main memory.
When we want to execute a process, we swap it into memory. Instead of swapping in the whole process, a lazy swapper brings only the necessary pages into main memory.
The valid-invalid bit scheme can be used to distinguish between pages that are in memory and pages
that are on the disk.
If the bit is set to valid, the associated page is both legal and in memory.
If the bit is set to invalid, the page either is not valid (i.e. not in the logical-address space of
the process) or is valid but is currently on the disk
The hardware needed to support demand paging is i) the page table and ii) secondary memory.
The page table marks an entry invalid through the valid-invalid bit. Secondary memory holds the pages that are not in main memory.
The procedure for handling a page fault:
1. Check an internal-table to determine whether the reference was a valid or an invalid memory access.
2. If the reference is invalid, we terminate the process. If reference is valid, but we have not
yet brought in that page, we now page it in.
3. Find a free-frame (by taking one from the free-frame list, for example).
4. Read the desired page into the newly allocated frame.
5. Modify the internal-table and the page-table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the trap.
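The six steps above can be sketched with toy data structures; all names and the backing-store contents are illustrative:

```python
page_table = {}                          # page → frame, for resident (valid) pages
backing_store = {0: "code page", 1: "data page"}   # pages held on disk
free_frames = [3, 4, 5]                  # free-frame list
memory = {}                              # frame → contents

def access(page):
    if page in page_table:               # 1. valid and resident: no fault
        return memory[page_table[page]]
    if page not in backing_store:        # 2. invalid reference → terminate
        raise MemoryError("invalid memory access")
    frame = free_frames.pop(0)           # 3. find a free frame
    memory[frame] = backing_store[page]  # 4. read the desired page into it
    page_table[page] = frame             # 5. mark the page resident
    return memory[frame]                 # 6. restart the interrupted access

print(access(1))                         # page fault, then "data page"
print(access(1))                         # now resident: served with no fault
```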
PURE DEMAND PAGING
What is pure demand paging?
It is a technique in which a page is never brought into main memory until it is required.
• In the extreme case, we can start executing a process with no pages in memory.
• When the operating system sets the instruction pointer to the first instruction of the process,
which is on a non-memory-resident page, the process immediately faults for the page.
• After this page is brought into memory, the process continues to execute, faulting as
necessary until every page that it needs is in memory. At that point it can execute with no
more faults.
• This scheme is pure demand paging: a page is never brought into memory until it is required.
Some programs may access several new pages of memory with each instruction, causing multiple
page-faults and poor performance.
Programs tend to have locality of reference, so this results in reasonable performance from demand
paging.
The complete sequence of servicing a page fault begins as follows:
1. Trap to the operating system.
2. Save the user registers and process state.
3. Determine that the interrupt was a page fault.
4. Check that the page-reference was legal and determine the location of the page on the disk.
5. Issue a read from the disk to a free frame:
a). Wait in a queue for this device until the read request is serviced.
b). Wait for the device seek time.
c). Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user.
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process-state for the other user (if step 6 is executed).
9. Determine that the interrupt was from the disk.
10. Correct the page-table and other tables to show that the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user-registers, process-state, and new page-table, and then resume the
interrupted instruction.
COPY-ON-WRITE
*** Explain the copy-on-write process in virtual memory.
The copy-on-write technique allows the parent and child processes initially to share the same pages in memory. If either process writes to a shared page, a copy of that page is created. Copy-on-write allows more efficient process creation because only the modified pages are copied.
For example:
Assume that the child process attempts to modify a page containing portions of the stack, with the
pages set to be copy-on-write. OS will then create a copy of this page, mapping it to the address
space of the child process. Child process will then modify its copied page & not the page belonging
to the parent process.
PAGE REPLACEMENT
Why do we need page replacement in an OS?
If we increase our degree of multiprogramming, we may over-allocate memory. While a user-process is executing, a page-fault occurs; the OS determines where the desired page resides on the disk but then finds that there are no free frames on the free-frame list.
Then the operating System could:
Terminate the user-process (Not a good idea).
Swap out a process, freeing all its frames, and reducing the level of multiprogramming.
Perform page replacement.
Page replacement
What are the problems that occur in page replacement concept? How it can be overcome.
Problem: If no frames are free, 2 page transfers (1 out & 1 in) are required. This situation doubles
the page-fault service-time and increases the EAT accordingly.
Solution: Use a modify-bit (or dirty bit).
Each page or frame has a modify-bit associated with the hardware. The modify-bit for a page is set
by the hardware whenever any word is written into the page (indicating that the page has been
modified).
Working: When we select a page for replacement, we examine its modify-bit.
If the modify-bit = 1, the page has been modified since it was read in, so we must write the page back to the disk.
If the modify-bit = 0, the page has not been modified, so we need not write it back; the copy on the disk is already up to date.
FIFO Page Replacement example: reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 initially empty frames.
The first three references (7, 0, 1) cause page-faults and are brought into the empty frames.
The next reference (2) replaces page 7, because page 7 was brought in first.
Since 0 is the next reference and 0 is already in memory, we have no fault for this reference.
The first reference to 3 results in replacement of page 0, since it is now first in line.
This process continues till the end of string.
There are fifteen page faults altogether.
LRU Page Replacement
Advantage:
1) Does not suffer from Belady's anomaly.
Disadvantage:
1) Few computer systems provide sufficient hardware support for true LRU page replacement.
Both LRU and OPT are called stack algorithms.
SECOND-CHANCE ALGORITHM
The number of bits of history included in the shift register can be varied to make the updating as
fast as possible. In the extreme case, the number can be reduced to zero, leaving only the reference
bit itself. This algorithm is called the second-chance algorithm. This is the variant of basic FIFO
replacement algorithm.
Procedure:
Initially all reference bits are set to 0, and any page hit sets the corresponding page's reference bit to 1.
When a page has been selected for replacement, we inspect its reference bit
If reference bit = 0, we proceed to replace this page.
If reference bit = 1, we give the page a second chance & move on to select next FIFO page.
When a page gets a second chance, its reference bit is cleared, and its arrival time is reset.
A circular queue can be used to implement the second-chance algorithm as shown in above figure.
A pointer (that is, a hand on the clock) indicates which page is to be replaced next.
When a frame is needed, the pointer advances until it finds a page with a 0 reference bit.
As it advances, it clears the reference bits.
Once a victim page is found, the page is replaced, and the new page is inserted in the circular
queue in that position.
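The circular-queue (clock) version can be sketched as follows. Following the procedure above, a newly loaded page's reference bit starts at 0 (conventions vary on this point); the class and variable names are illustrative:

```python
class Clock:
    """Second-chance (clock) page replacement over a fixed set of frames."""
    def __init__(self, nframes):
        self.pages = [None] * nframes
        self.ref = [0] * nframes       # reference bits, all initially 0
        self.hand = 0                  # clock hand: next candidate victim

    def access(self, page):
        if page in self.pages:         # hit: set the page's reference bit
            self.ref[self.pages.index(page)] = 1
            return False               # no page fault
        while self.ref[self.hand] == 1:
            self.ref[self.hand] = 0    # give this page a second chance
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page   # found a page with reference bit 0
        self.ref[self.hand] = 0        # newly loaded page starts unreferenced
        self.hand = (self.hand + 1) % len(self.pages)
        return True                    # page fault

c = Clock(3)
faults = sum(c.access(p) for p in [1, 2, 3, 1, 4])
print(faults, c.pages)                 # page 1 was re-referenced, so page 2 is the victim
```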
Enhanced Second-Chance Algorithm
We can enhance the second-chance algorithm by considering Reference bit and modify-bit.
We have following 4 possible classes:
1) (0, 0) neither recently used nor modified -best page to replace.
2) (0, 1) not recently used but modified - not quite as good, because the page will need to be written out before replacement.
3) (1, 0) recently used but clean - probably will be used again soon.
4) (1, 1) recently used and modified - probably will be used again soon, and the page will need to be written out to disk before it can be replaced.
Each page is in one of these four classes. When page replacement is called for, we examine the class
to which that page belongs. We replace the first page encountered in the lowest nonempty class.
Counting-Based Page Replacement
1. LFU (Least Frequently Used) page-replacement algorithm
Working principle: The page with the smallest reference count is replaced. The reasoning is that an actively used page should have a large reference count.
Problem:
When a page is used heavily during initial phase of a process but then is never used again. Since it
was used heavily, it has a large count and remains in memory even though it is no longer needed.
Solution:
Shift the counts right by 1 bit at regular intervals, forming an exponentially decaying average usage
count.
2. MFU (Most Frequently Used) page-replacement algorithm
Working principle: The page with the largest count is replaced, on the argument that the page with the smallest count was probably just brought in and has yet to be used.
EXERCISE PROBLEMS
No of page faults = 12
FIFO with 3 frames:
Frames 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
1 7 7 7 2 2 2 2 4 4 4 0 0 0 0 0 0 0 7 7 7
2 0 0 0 0 3 3 3 2 2 2 2 2 1 1 1 1 1 0 0
3 1 1 1 1 0 0 0 3 3 3 3 3 2 2 2 2 2 1
No. of Page faults √ √ √ √ √ √ √ √ √ √ √ √ √ √ √
No of page faults = 15
Optimal with 3 frames:
Frames 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
1 7 7 7 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 7
2 0 0 0 0 0 0 4 4 4 0 0 0 0 0 0 0 0 0 0
3 1 1 1 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1
No. of Page faults √ √ √ √ √ √ √ √ √
No of page faults = 9
Conclusion: The optimal page replacement algorithm is most efficient among three algorithms, as
it has lowest page faults i.e. 9.
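The three fault counts can be verified with a short simulation of each policy on the reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 with 3 frames; the function names are illustrative:

```python
def fifo(ref, nframes):
    frames, faults = [], 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)              # evict the oldest page
            frames.append(p)
    return faults

def lru(ref, nframes):
    frames, faults = [], 0                 # ordered least- to most-recently used
    for p in ref:
        if p in frames:
            frames.remove(p)               # hit: move to most-recently-used end
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)              # evict the least recently used page
        frames.append(p)
    return faults

def opt(ref, nframes):
    frames, faults = [], 0
    for i, p in enumerate(ref):
        if p in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(p)
        else:                              # evict the page not needed for longest
            future = ref[i + 1:]
            key = lambda q: future.index(q) if q in future else len(future)
            frames[frames.index(max(frames, key=key))] = p
    return faults

REF = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo(REF, 3), lru(REF, 3), opt(REF, 3))   # 15 12 9
```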
No of page faults = 15
LRU with 5 frames:
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
3 3 3 3 3 3 6 6 6 6 6 6 6 6 6 6 6 6 6
4 4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3 3
5 5 5 5 5 5 5 7 7 7 7 7 7 7 7
No. of Page faults √ √ √ √ √ √ √ √
No of page faults = 8
Optimal with 3 frames:
Frames 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
1 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 6
2 2 2 2 2 2 2 2 2 2 2 2 7 7 7 2 2 2 2 2
3 3 4 4 4 5 6 6 6 6 6 6 6 6 6 1 1 1 1
No. of Page faults √ √ √ √ √ √ √ √ √ √ √
No of page faults = 11
No of page faults = 7
3) For the page reference, 5, 4, 3, 2, 1, 4, 3, 5, 4, 3, 2, 1, 5. calculate the page faults that occur
using FIFO and LRU for 3 and 4 page frames respectively.
Solution:
LRU with 3 frames:
Frames 5 4 3 2 1 4 3 5 4 3 2 1 5
1 5 5 5 2 2 2 3 3 3 3 3 3 5
2 4 4 4 1 1 1 5 5 5 2 2 2
3 3 3 3 4 4 4 4 4 4 1 1
No. of Page faults √ √ √ √ √ √ √ √ √ √ √
No of page faults = 11
No of page faults = 9
No of page faults = 10
No of page faults = 11
ALLOCATION OF FRAMES
We must also allocate at least a minimum number of frames to processes. One reason for this is
performance. As the number of frames allocated to each process decreases, the page-fault rate
increases, slowing process execution. In addition, when a page-fault occurs before an executing
instruction is complete, the instruction must be restarted. The minimum number of frames is defined
by the computer architecture.
Explain any one frame-allocation algorithm with an example.
ALLOCATION ALGORITHMS
1. Equal Allocation
2. Proportional Allocation
Equal Allocation:
The easiest way to split m frames among n processes is to give everyone an equal share, m/n frames.
For example: if there are 93 frames and five processes, each process will get 18 frames. The three
leftover frames can be used as a free-frame buffer pool.
Proportional Allocation
We allocate available memory to each process according to its size. If the size of the virtual memory of process p_i is s_i and S is the sum of all the s_i, then with m total frames we allocate a_i = (s_i / S) x m frames to process p_i.
In both 1 & 2, the allocation may vary according to the multiprogramming level. If the
multiprogramming level is increased, each process will lose some frames to provide the memory
needed for the new process. Conversely, if the multiprogramming level decreases, the frames that
were allocated to the departed process can be spread over the remaining processes.
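Proportional allocation, a_i = (s_i / S) x m, can be sketched as follows; the remainder-distribution policy and the 62-frame example are illustrative:

```python
def proportional(m, sizes):
    """Give process i roughly (s_i / S) * m of the m free frames."""
    total = sum(sizes)
    alloc = [m * s // total for s in sizes]     # floor of each share
    for i in range(m - sum(alloc)):             # hand out leftover frames
        alloc[i % len(alloc)] += 1              # (simple round-robin policy)
    return alloc

# e.g. 62 free frames shared by processes of 10 pages and 127 pages
print(proportional(62, [10, 127]))              # → [5, 57]
```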
GLOBAL VERSUS LOCAL ALLOCATION
Global Replacement:
Allows a process to select a replacement frame from the set of all frames; a process may take frames allocated to other processes, thus increasing the number of frames allocated to it.
Disadvantage: A process cannot control its own page-fault rate.
Advantage: Results in greater system throughput.
Local Replacement:
Each process selects a replacement frame only from its own set of allocated frames; the number of frames allocated to a process does not change.
Disadvantage: Might hinder a process by not making available to it other, less-used pages of memory.
THRASHING
What is thrashing? Explain thrashing concept in operating system
As we increase the degree of multiprogramming, CPU utilization increases up to a certain level; beyond that point it drastically decreases. This phenomenon is called thrashing.
A process is thrashing if it is spending more time paging than executing.
Cause of Thrashing
• Thrashing results in severe performance problems.
• The OS monitors CPU utilization. If it is too low, the OS increases the degree of multiprogramming by introducing a new process to the system.
• If a global replacement algorithm is used, it replaces pages without regard to the process to which they belong.
• As the processes wait for the paging device, CPU utilization decreases.
The thrashing phenomenon:
As processes keep faulting, they queue up for the paging device, so CPU utilization decreases.
The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming as a result. The new process causes even more page-faults and a longer queue, driving CPU utilization down further.
[Figure: Thrashing]