
Unit-III

Chapter 1

Memory Management Strategies


Introduction
In a multiprogramming computer, the operating system resides in one part of memory and the rest is shared by multiple processes. The task of subdividing memory among the different processes is called memory management. Memory management is the method by which the operating system manages operations between main memory and disk during process execution. Its main aim is to achieve efficient utilization of memory.
Memory management is required to:
 Allocate and de-allocate memory before and after process execution.
 Keep track of the memory space used by each process.
 Minimize fragmentation.
 Make proper use of main memory.
 Maintain data integrity while a process executes.

Swapping
Swapping is a process of swap a process temporarily into a secondary memory
from the main memory, which is fast as compared to secondary memory.
A swapping allows more processes to be run and can be fit into memory at one time.
The main part of swapping is transferred time and the total time directly
proportional to the amount of memory swapped. Swapping is also known as roll-out,
roll in, because if a higher priority process arrives and wants service, the memory
manager can swap out the lower priority process and then load and execute the
higher priority process. After finishing higher priority work, the lower priority
process swapped back in memory and continued to the execution process.
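To get a feel for the numbers (figures invented for illustration): if a 100 MB process must be swapped to a disk with a transfer rate of 50 MB/s, one transfer takes 100/50 = 2 seconds, so a swap-out followed by a swap-in costs roughly 4 seconds, plus any seek and rotational latency. This is why transfer time dominates swap time and why swap time grows in direct proportion to the amount of memory swapped.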

Contiguous Memory Allocation

The main memory must accommodate both the operating system and the various user processes, so the allocation of memory is an important task of the operating system. Memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously, so we must consider how to allocate available memory to the processes waiting in the input queue to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous section of memory.

To achieve proper memory utilization, memory must be allocated efficiently. One of the simplest methods is to divide memory into several fixed-sized partitions, where each partition contains exactly one process; the degree of multiprogramming is then bounded by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
Variable partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a “Hole”. When a process arrives and needs memory, we search for a hole large enough to store it. If one is found, we allocate only as much memory as is needed, keeping the rest available to satisfy future requests. This is an instance of the dynamic storage-allocation problem: how to satisfy a request of size n from a list of free holes.

There are some solutions to this problem:


First fit:-
In the first fit, the process is allocated the first free hole that is large enough to fulfill its requirement.

Here, in this diagram, the 40 KB memory block is the first free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.

Best fit:-
In the best fit, we allocate the smallest hole that is big enough for the process's requirements. For this, we must search the entire list, unless the list is ordered by size.

Here in this example, we first traverse the complete list and find that the last hole, 25 KB, is the best suitable hole for process A (size 25 KB). In this method, memory utilization is maximum compared to the other memory allocation techniques.
Worst fit:-
In the worst fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.

Here in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is the major issue in the worst fit.
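The three placement strategies differ only in how they scan the list of holes. The following minimal C sketch (the hole sizes are invented to match the diagrams above) picks a hole index under each policy; it is an illustration, not a full allocator:

    #include <stdio.h>

    /* Return the index of the chosen hole, or -1 if none fits. */
    static int first_fit(const int holes[], int n, int req) {
        for (int i = 0; i < n; i++)
            if (holes[i] >= req) return i;       /* first hole that fits */
        return -1;
    }

    static int best_fit(const int holes[], int n, int req) {
        int best = -1;
        for (int i = 0; i < n; i++)              /* smallest hole that still fits */
            if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
                best = i;
        return best;
    }

    static int worst_fit(const int holes[], int n, int req) {
        int worst = -1;
        for (int i = 0; i < n; i++)              /* largest hole that fits */
            if (holes[i] >= req && (worst < 0 || holes[i] > holes[worst]))
                worst = i;
        return worst;
    }

    int main(void) {
        int holes[] = {10, 20, 40, 60, 25};      /* free-hole sizes in KB */
        int req = 25;                            /* process A needs 25 KB */
        printf("first fit -> hole %d\n", first_fit(holes, 5, req));  /* 40 KB */
        printf("best fit  -> hole %d\n", best_fit(holes, 5, req));   /* 25 KB */
        printf("worst fit -> hole %d\n", worst_fit(holes, 5, req));  /* 60 KB */
        return 0;
    }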
Fragmentation arises when processes are loaded into and removed from memory, leaving behind small free holes. These holes cannot be assigned to new processes, because they are not combined or do not fulfill the memory requirement of a process. To maintain a good degree of multiprogramming, we must reduce this waste of memory. Operating systems distinguish two types of fragmentation:

Internal fragmentation:

Internal fragmentation occurs when the memory block allocated to a process is larger than its requested size. The leftover space inside the block goes unused, creating the internal fragmentation problem.
Example: Suppose fixed partitioning is used for memory allocation and there are blocks of size 3 MB, 6 MB, and 7 MB in memory. A new process p4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any other process. This is called internal fragmentation.
External fragmentation:

In external fragmentation, there is enough total free memory, but we cannot assign it to a process because the free blocks are not contiguous.
Example: Continuing the example above, suppose three processes p1, p2, p3 arrive with sizes 2 MB, 4 MB, and 7 MB respectively and are allocated the 3 MB, 6 MB, and 7 MB blocks. After allocation, p1 and p2 leave 1 MB and 2 MB unused. Now a new process p4 arrives and demands a 3 MB block of memory. That much memory is free in total, but we cannot assign it because the free space is not contiguous. This is called external fragmentation.
Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. Compaction is used to overcome this problem: all free memory is combined into one large block, so that the space can be used effectively by other processes.
Another possible solution to external fragmentation is to allow the logical address space of a process to be noncontiguous, thus permitting the process to be allocated physical memory wherever it is available.

Paging:

Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be non-contiguous.
 Logical Address or Virtual Address (represented in bits): An address
generated by the CPU
 Logical Address Space or Virtual Address Space (represented in words or
bytes): The set of all logical addresses generated by a program
 Physical Address (represented in bits): An address actually available on a
memory unit
 Physical Address Space (represented in words or bytes): The set of all
physical addresses corresponding to the logical addresses

Example:
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 words = 2^27 words, then Logical Address = log2(2^27) = 27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 words = 2^24 words, then Physical Address = log2(2^24) = 24 bits

The mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device; this mapping is known as the paging technique.
 The Physical Address Space is conceptually divided into several fixed-size
blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size

Let us consider an example:


 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into:
 Page number (p): the number of bits required to represent a page in the Logical Address Space.
 Page offset (d): the number of bits required to represent a particular word within a page, i.e. the page size of the Logical Address Space.

The physical address is divided into:
 Frame number (f): the number of bits required to represent a frame in the Physical Address Space.
 Frame offset (d): the number of bits required to represent a particular word within a frame, i.e. the frame size of the Physical Address Space.
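As a concrete illustration of this split, the short C sketch below (the page size, page-table contents, and sample address are made-up values following the 1 K-word example above) translates a logical address into a physical address:

    #include <stdio.h>

    #define PAGE_SIZE   1024                 /* 1 K words: offset needs 10 bits */
    #define OFFSET_BITS 10
    #define OFFSET_MASK (PAGE_SIZE - 1)

    int main(void) {
        /* Illustrative page table: page_table[p] holds frame number f. */
        int page_table[8] = {3, 0, 1, 7, 5, 2, 6, 4};

        int logical  = 2 * PAGE_SIZE + 37;          /* page 2, offset 37 */
        int p = logical >> OFFSET_BITS;             /* page number       */
        int d = logical & OFFSET_MASK;              /* page offset       */
        int f = page_table[p];                      /* look up the frame */
        int physical = (f << OFFSET_BITS) | d;      /* frame number, then offset */

        printf("logical %d -> page %d, offset %d -> physical %d\n",
               logical, p, d, physical);
        return 0;
    }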

The hardware implementation of the page table can be done using dedicated registers, but the use of registers is satisfactory only if the page table is small. If the page table contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast-lookup hardware cache.
 The TLB is an associative, high-speed memory.
 Each entry in TLB consists of two parts: a tag and a value.
 When this memory is used, then an item is compared with all tags
simultaneously. If the item is found, then the corresponding value is
returned.

Let the main memory access time be m.
If the page table is kept in main memory, every reference requires two memory accesses:
Effective access time = m (to read the page-table entry) + m (to access the required word) = 2m
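With a TLB, the average cost drops. A commonly used model (a sketch, not from this text): if t is the TLB lookup time and h the hit ratio, then

Effective access time = h * (t + m) + (1 - h) * (t + 2m)

For example, with m = 100 ns, t = 10 ns, and h = 0.9:
EAT = 0.9 * 110 + 0.1 * 210 = 120 ns, compared with 2m = 200 ns when there is no TLB.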
Segmentation:

A process is divided into segments: the chunks into which a program is divided, which are not necessarily all of the same size. Segmentation provides the user's view of the process, which paging does not; this user view is mapped onto physical memory.

There are 2 types of segmentation:


1. Virtual memory segmentation
Each process is divided into a number of segments, not all of which are
resident at any one point in time.
2. Simple segmentation
Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A table, called the Segment Table, stores the information about all the segments.
Segment Table – It maps the two-dimensional logical address (segment number, offset) into a one-dimensional physical address. Each table entry has:

 Base Address: It contains the starting physical address where the segment resides in memory.
 Limit: It specifies the length of the segment.

Translation of a two-dimensional logical address to a one-dimensional physical address.

The address generated by the CPU is divided into:
 Segment number (s): the number of bits required to represent the segment.
 Segment offset (d): the number of bits required to represent the offset within the segment, i.e. the segment's size.
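A minimal C sketch of segment-table translation follows (the base and limit values are invented for illustration); the limit check is what raises a trap on an out-of-range offset:

    #include <stdio.h>
    #include <stdlib.h>

    struct segment { int base; int limit; };    /* one segment-table entry */

    static int translate(const struct segment table[], int s, int d) {
        if (d >= table[s].limit) {              /* offset beyond segment length */
            fprintf(stderr, "trap: segment %d, offset %d out of range\n", s, d);
            exit(EXIT_FAILURE);
        }
        return table[s].base + d;               /* one-dimensional physical address */
    }

    int main(void) {
        struct segment table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };
        printf("(s=2, d=53)  -> %d\n", translate(table, 2, 53));   /* 4353 */
        printf("(s=1, d=399) -> %d\n", translate(table, 1, 399));  /* 6699 */
        return 0;
    }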

Advantages of Segmentation
 There is no internal fragmentation.
 Segment Table consumes less space in comparison to Page table in paging.

Disadvantage of Segmentation

 As processes are loaded and removed from the memory, the free memory
space is broken into little pieces, causing External fragmentation.

Chapter- 2

Virtual Memory

Introduction:

Virtual memory is a storage scheme that gives the user the illusion of having a very big main memory. This is done by treating a part of secondary memory as if it were main memory.

In this scheme, the user can load processes bigger than the available main memory, under the illusion that enough memory is available to load them. Instead of loading one big process into main memory, the operating system loads parts of more than one process. This increases the degree of multiprogramming, and therefore CPU utilization also increases.

Virtual memory has become quite common in the modern world. Whenever pages need to be loaded into main memory for execution and there is not enough memory for all of them, then instead of refusing to load the pages, the OS searches for RAM areas that have been least recently used or are not currently referenced, and copies them to secondary memory to make space for the new pages in main memory.

Since this whole procedure happens automatically, the computer appears to have unlimited RAM.

Let us assume two processes, P1 and P2, containing 4 pages each, with a page size of 1 KB. The main memory contains 8 frames of 1 KB each. The OS resides in the first two partitions. The 1st page of P1 is stored in the third partition, and the other frames are shown filled with different pages of the processes.

The page table of each process is 1 KB in size, so each fits in one frame. The page tables of both processes contain various information, also shown in the image.

The CPU contains a register that holds the base address of the page table: 5 in the case of P1 and 7 in the case of P2. This page-table base address is added to the page number from the logical address to access the actual corresponding entry.

Advantages of Virtual Memory

1. The degree of multiprogramming is increased.
2. Users can run large applications with less real RAM.
3. There is no need to buy more RAM modules.

Disadvantages of Virtual Memory

1. The system becomes slower, since swapping takes time.
2. Switching between applications takes more time.
3. The user has less hard disk space available for their own use.

Demand Paging

Every process in virtual memory contains lots of pages, and in some cases it is not efficient to swap in all of a process's pages at once, because the program may need only certain pages to run. For example, suppose there is a 500 MB application that needs as little as 100 MB of pages to be swapped in; in that case, there is no need to swap in all the pages at once.

The demand paging system is similar to a paging system with swapping, where processes reside mainly in secondary memory (usually the hard disk). Demand paging solves the above problem by swapping pages in only on demand. This is also known as lazy swapping: a page is never swapped into memory unless it is needed.

A swapper that deals with the individual pages of a process is referred to as a pager.

Demand Paging is a technique in which a page is usually brought into the main
memory only when it is needed or demanded by the CPU. Initially, only those pages
are loaded that are required by the process immediately. Those pages that are never
accessed are thus never loaded into the physical memory.

Figure: Transfer of paged memory to contiguous disk space.

Whenever a page is needed, a reference is made to it:

 If the reference is invalid, the process is aborted.
 If the page is not in memory, it is brought into memory.

Valid-Invalid Bit

Some form of hardware support is used to distinguish between the pages that are
in the memory and the pages that are on the disk. Thus for this purpose Valid-Invalid
scheme is used:
 With each page-table entry, a valid-invalid bit is associated (where 1 indicates the page is in memory and 0 indicates it is not).
 Initially, the valid-invalid bit is set to 0 for all table entries.
1. If the bit is set to "valid", the associated page is both legal and in memory.
2. If the bit is set to "invalid", the page is either not valid (not in the process's logical address space) or valid but currently on the disk rather than in memory.
 For the pages brought into memory, the page table is set as usual.
 For the pages not currently in memory, the page-table entry is either simply marked invalid or contains the address of the page on disk.
During address translation, if the valid-invalid bit in the page-table entry is 0, a page fault occurs.

The above figure indicates the page table when some pages are not in main memory.

Working of Demand Paging

The components that are involved in the Demand paging process are as follows:

 Main Memory
 CPU
 Secondary Memory
 Interrupt
 Physical Address space
 Logical Address space
 Operating System
 Page Table
1. If a page needed by a process is not available in main memory, the CPU's request for it cannot be satisfied directly, so an interrupt (the page fault) is generated.
2. The operating system moves the process to the blocked state while the interrupt is handled.
3. The operating system then locates the required page in the logical address space (on the backing store).
4. With the help of a page replacement algorithm, a replacement is made in the physical address space, and the page tables are updated simultaneously.
5. The CPU is informed about the update and asked to continue execution, and the process goes back into its ready state.
When the process requires any of the pages that are not loaded into the memory,
a page fault trap is triggered and the following steps are followed,

1. The memory address requested by the process is first checked, to verify that the request made by the process is valid.
2. If it is found to be invalid, the process is terminated.
3. In case the request by the process is valid, a free frame is located, possibly
from a free-frame list, where the required page will be moved.
4. A new operation is scheduled to move the necessary page from the disk to the
specified memory location. (This will usually block the process on an I/O wait,
allowing some other process to use the CPU in the meantime.)
5. When the I/O operation is complete, the process's page table is updated with
the new frame number, and the invalid bit is changed to valid.
6. The instruction that caused the page fault must now be restarted from the
beginning.
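The sequence above can be condensed into a small user-space simulation. The C sketch below (a toy model with invented sizes; a real kernel's fault handler is far more involved) keeps a page table, a free-frame list, and a trivial FIFO victim pointer, and walks these steps on every reference:

    #include <stdio.h>

    #define NPAGES  8                      /* pages in the address space */
    #define NFRAMES 4                      /* physical frames available  */

    int page_table[NPAGES];                /* frame number, or -1 if not resident   */
    int frames[NFRAMES];                   /* page held by each frame, -1 if free   */
    int next_victim = 0;                   /* trivial FIFO victim pointer           */

    int access(int page) {
        if (page < 0 || page >= NPAGES)    /* invalid reference: terminate */
            return -1;
        if (page_table[page] >= 0)         /* resident: no fault */
            return page_table[page];

        /* Page fault: take a free frame, else evict a victim (FIFO). */
        int f = -1;
        for (int i = 0; i < NFRAMES; i++)
            if (frames[i] < 0) { f = i; break; }
        if (f < 0) {
            f = next_victim;
            next_victim = (next_victim + 1) % NFRAMES;
            page_table[frames[f]] = -1;    /* invalidate the evicted page */
        }
        /* The disk read would block the process here; then update the table. */
        frames[f] = page;
        page_table[page] = f;
        printf("page fault: page %d loaded into frame %d\n", page, f);
        return f;
    }

    int main(void) {
        for (int i = 0; i < NPAGES; i++)  page_table[i] = -1;
        for (int i = 0; i < NFRAMES; i++) frames[i] = -1;
        int refs[] = {0, 1, 2, 0, 3, 4, 1};   /* made-up reference string */
        for (int i = 0; i < 7; i++) access(refs[i]);
        return 0;
    }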

Advantages of Demand Paging

The benefits of using the Demand Paging technique are:

 With the help of demand paging, memory is utilized efficiently.
 Demand paging avoids external fragmentation.
 Less input/output is needed.
 The process is not constrained by the size of physical memory.
 Pages become easier to share.
 Portions of the process that are never called are never loaded.
 No compaction is required.

Disadvantages /Drawbacks of Demand Paging

 There is an increase in overhead due to interrupts and page tables.
 Memory access time in demand paging is longer.

Pure Demand Paging

In some cases, no pages are loaded into memory initially; pages are loaded only when demanded by the process through page faults. This is referred to as Pure Demand Paging.

 In pure demand paging, not even a single page is loaded into memory initially, so execution begins with a page fault.
 When the execution of the process starts with no pages in memory, the operating system sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, and the process immediately faults for that page.
 After that page is brought into memory, the process continues its execution, faulting as necessary until every page it needs is in memory.
 At that point, it can execute with no more faults.
 This scheme is referred to as pure demand paging: never bring a page into memory until it is required.

Copy on Write

Copy on Write, or simply COW, is a resource management technique. One of its main uses is in the implementation of the fork system call, where parent and child initially share virtual memory pages.

 In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process, called the child process.

 The idea behind copy-on-write is that when a parent process creates a child process, both processes initially share the same pages in memory, and these shared pages are marked as copy-on-write. If either process tries to modify a shared page, only then is a copy of that page created, and the modification is made on the copy by that process, leaving the other process unaffected.

 Suppose there is a process P that creates a new process Q, and then process P modifies page 3.

The figures below show what happens before and after process P modifies page 3.
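The effect is easy to observe from user space. In the minimal C sketch below, parent and child share pages after fork(); the child's write triggers a copy-on-write fault, so the parent's copy of the variable is untouched:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int shared_value = 42;        /* lives in a page shared copy-on-write after fork */

    int main(void) {
        pid_t pid = fork();       /* child initially shares the parent's pages */
        if (pid < 0) { perror("fork"); exit(1); }

        if (pid == 0) {
            shared_value = 99;    /* first write: the kernel copies the page */
            printf("child : shared_value = %d\n", shared_value);   /* 99 */
            exit(0);
        }
        wait(NULL);               /* let the child finish first */
        printf("parent: shared_value = %d\n", shared_value);       /* still 42 */
        return 0;
    }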

Page Replacement

Page replacement is needed in operating systems that use virtual memory with demand paging. As we know, in demand paging only a set of a process's pages is loaded into memory, so that more processes can be kept in memory at the same time.

When a page residing in virtual memory is requested by a process for its execution, the operating system needs to decide which page in memory will be replaced by the requested page. This process is known as page replacement and is a vital component of virtual memory management.

Need of Page Replacement Algorithms

A page fault occurs when a program running on the CPU tries to access a page that is in its address space but is not currently loaded into main physical memory, the RAM of the system.

Since actual RAM is much smaller than virtual memory, page faults occur. Whenever a page fault happens, the operating system has to replace an existing page in RAM with the newly requested page; page replacement algorithms help it decide which page to replace. The primary objective of all page replacement algorithms is to minimize the number of page faults.

Page Replacement Algorithms

1. First In First Out (FIFO)

The FIFO algorithm is the simplest of all page replacement algorithms. We maintain a queue of all the pages currently in memory: the oldest page is at the front of the queue and the most recent page is at the back, or rear, of the queue.

Whenever a page fault occurs, the operating system looks at the front of the queue to find the page to be replaced by the newly requested page. It adds the newly requested page at the rear and removes the oldest page from the front of the queue.

Example: Consider the page reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames. Let's find the number of page faults:

 Initially, all of the slots are empty so page faults occur at 3,1,2.

Page faults = 3

 When page 1 comes, it is in the memory so no page fault occurs.

Page faults = 3

 When page 6 comes, it is not present and a page fault occurs. Since there are
no empty slots, we remove the front of the queue, i.e 3.

Page faults = 4

 When page 5 comes, it is also not present and hence a page fault occurs. The
front of the queue i.e 1 is removed.

Page faults = 5

 When page 1 comes, it is not found in memory and again a page fault occurs.
The front of the queue i.e 2 is removed.

Page faults = 6

 When page 3 comes, it is again not found in memory; a page fault occurs, and page 6, now at the front of the queue, is removed.

Total page faults = 7
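A compact C simulation reproduces these 7 faults (a sketch written for this reference string; the array holds the resident pages, with head marking the oldest):

    #include <stdio.h>

    int main(void) {
        int refs[] = {3, 1, 2, 1, 6, 5, 1, 3};
        int nframes = 3, queue[3];
        int used = 0, head = 0, faults = 0;

        for (int t = 0; t < 8; t++) {
            int p = refs[t], resident = 0;
            for (int i = 0; i < used; i++)
                if (queue[i] == p) resident = 1;
            if (resident) continue;        /* hit: FIFO order unchanged */
            faults++;
            if (used < nframes) {
                queue[used++] = p;         /* fill an empty slot */
            } else {
                printf("fault on %d: evict %d\n", p, queue[head]);
                queue[head] = p;           /* replace the oldest page */
                head = (head + 1) % nframes;
            }
        }
        printf("total page faults = %d\n", faults);   /* prints 7 */
        return 0;
    }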

Belady's anomaly: Generally, if we increase the number of frames in memory, the number of page faults should decrease. Belady's anomaly refers to the phenomenon where, for some reference strings, increasing the number of frames in memory increases the number of page faults as well.

Advantages

 Simple to understand and implement
 Does not cause much overhead

Disadvantages

 Poor performance
 Ignores how recently or how often pages are used; it simply replaces the oldest page.
 Suffers from Belady's anomaly.

2. Optimal Page Replacement in OS

Optimal page replacement is the best page replacement algorithm, as it results in the fewest page faults. In this algorithm, the page replaced is the one that will not be used for the longest duration of time in the future. In simple terms, the page that will be referenced farthest in the future is replaced.

Example:

Let's take the same page reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames as we saw in FIFO. This also helps show how optimal page replacement performs best.

 Initially, since all the slots are empty, pages 3, 1, 2 cause a page fault and
take the empty slots.

Page faults = 3

 When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

 When page 6 comes, it is not in the memory, so a page fault occurs and 2 is
removed as it is not going to be used again.

Page faults = 4

 When page 5 comes, it is also not in the memory and causes a page fault.
Similar to above 6 is removed as it is not going to be used again.

Page faults = 5

 When page 1 and page 3 come, they are in the memory so no page fault
occurs.

Total page faults = 5

Advantages

 Excellent efficiency
 Less complexity
 Easy to use and understand
 Simple data structures can be used to implement
 Used as the benchmark for other algorithms

Disadvantages

 More time consuming
 Difficult for error handling
 Needs future knowledge of program behavior, which is not available in practice

3. Least Recently Used (LRU) Page Replacement Algorithm

The least recently used page replacement algorithm keeps track of page usage over a period of time. It works on the principle of locality of reference, which states that a program tends to access the same set of memory locations repetitively over a short period of time, so pages that have been used heavily in the past are likely to be used heavily in the future as well.

In this algorithm, when a page fault occurs, then the page that has not been
used for the longest duration of time is replaced by the newly requested page.

Example: Let's see the performance of LRU on the same reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames:

 Initially, since all the slots are empty, pages 3, 1, 2 cause a page fault and
take the empty slots.

Page faults = 3

 When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

 When page 6 comes, it is not in the memory, so a page fault occurs and the
least recently used page 3 is removed.

Page faults = 4

 When page 5 comes, it again causes a page fault. The least recently used page is now 2 (pages 1 and 6 were referenced more recently), so page 2 is removed.

Page faults = 5

 When page 1 comes again, it is still in memory, so no page fault occurs.

Page faults = 5

 When page 3 comes, a page fault occurs and page 6 is removed as the least recently used one.

Total page faults = 6

In this example, LRU causes one fewer page fault than FIFO, but this will not always be the case: the outcome depends on the reference string, the number of frames available in memory, etc. In fact, on most occasions, LRU performs better than FIFO.
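To check the count, here is a matching C sketch of LRU on the same string (a self-contained illustration; last-use timestamps stand in for the recency bookkeeping that real implementations approximate in hardware):

    #include <stdio.h>

    int main(void) {
        int refs[] = {3, 1, 2, 1, 6, 5, 1, 3};
        int nframes = 3;
        int frame[3], last_use[3];         /* resident page and its last-use time */
        int used = 0, faults = 0;

        for (int t = 0; t < 8; t++) {
            int p = refs[t], hit = -1;
            for (int i = 0; i < used; i++)
                if (frame[i] == p) hit = i;
            if (hit >= 0) { last_use[hit] = t; continue; }   /* page resident */
            faults++;
            if (used < nframes) {          /* free frame available */
                frame[used] = p; last_use[used] = t; used++;
            } else {                       /* evict the least recently used */
                int lru = 0;
                for (int i = 1; i < nframes; i++)
                    if (last_use[i] < last_use[lru]) lru = i;
                printf("fault on %d: evict %d\n", p, frame[lru]);
                frame[lru] = p; last_use[lru] = t;
            }
        }
        printf("total page faults = %d\n", faults);   /* prints 6 */
        return 0;
    }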

Advantages

 It is amenable to full analysis
 Doesn't suffer from Belady's anomaly
 Often more efficient than other algorithms

Disadvantages

 It requires additional data structures to implement
 More complex
 Considerable hardware assistance is required

4. Last In First Out (LIFO) Page Replacement Algorithm

This is the Last In First Out algorithm, which works on the LIFO principle: the newest page is replaced by the requested page. Usually this is done through a stack, where we maintain a stack of the pages currently in memory with the newest page at the top. Whenever a page fault occurs, the page at the top of the stack is replaced.

Example: Let's see how LIFO performs for our example string 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames:

 Initially, since all the slots are empty, pages 3, 1, 2 cause page faults and take the empty slots.

Page faults = 3

 When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

 When page 6 comes, the page fault occurs and page 2 is removed as it is on
the top of the stack and is the newest page.

Page faults = 4

 When page 5 comes, it is not in the memory, which causes a page fault, and
hence page 6 is removed being on top of the stack.

Page faults = 5

 When page 1 and page 3 come, they are in memory already, hence no page
fault occurs.

Total page faults = 5

Notice that this is the same number of page faults as the optimal page replacement algorithm. So we can say that, for this series of pages, LIFO is the best algorithm that can be implemented without prior knowledge of future references.

Advantages

 Simple to understand
 Easy to implement
 No overhead

Disadvantages

 Does not consider the locality principle, and hence may perform very poorly
 Old pages may reside in memory forever even if they are never used

5. Random Page Replacement in OS

This algorithm, as the name suggests, chooses any random page in the memory
to be replaced by the requested page. This algorithm can behave like any of the
algorithms based on the random page chosen to be replaced.

Example: Suppose we choose to replace the middle frame every time a page fault occurs. Let's see how our series 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames performs with this algorithm:

 Initially, since all the slots are empty, pages 3, 1, 2 cause page faults and take the empty slots.

Page faults = 3

 When page 1 comes, it is in the memory and no page fault occurs.

Page faults = 3

 When page 6 comes, a page fault occurs; we replace the middle frame, i.e. page 1 is removed.

Page faults = 4

 When page 5 comes, a page fault occurs again, and the middle element, page 6, is removed.

Page faults = 5

 When page 1 comes, there is again a page fault, and again the middle element, page 5, is removed.

Page faults = 6

 When page 3 comes, it is in memory, hence no page fault occurs.

Total page faults = 6

As we can see, the performance is neither the best nor the worst. Performance under the random replacement algorithm depends on which pages happen to be chosen for replacement.

Advantages

 Easy to understand and implement
 No extra data structures needed
 No overhead

Disadvantages

 It cannot be analyzed reliably, and may produce different performance for the same series
 It can suffer from Belady's anomaly

Conclusion

 The objective of page replacement algorithms is to minimize the page faults

 FIFO page replacement algorithm replaces the oldest page in the memory

 Optimal page replacement algorithm replaces the page which will be referred
farthest in the future

 LRU page replacement algorithm replaces the page that has not been used for
the longest duration of time

 LIFO page replacement algorithm replaces the newest page in memory

 Random page replacement algorithm replaces any page at random

 The optimal page replacement algorithm is considered the most effective, but it cannot be implemented in practical scenarios due to various limitations

Frame Allocation

Virtual memory, an important aspect of operating systems, is implemented using demand paging. Demand paging necessitates both a page-replacement algorithm and a frame-allocation algorithm. Frame allocation algorithms are needed when there are multiple processes; they decide how many frames to allocate to each process.
There are various constraints to the strategies for the allocation of frames:
 You cannot allocate more than the total number of available
frames.
 At least a minimum number of frames should be allocated to each process, for two reasons. First, as fewer frames are allocated, the page-fault ratio increases, degrading the performance of the executing process. Second, there should be enough frames to hold all the different pages that any single instruction can reference.

Frame allocation algorithms

The two algorithms commonly used to allocate frames to a process are:
1. Equal allocation: In a system with x frames and y processes, each process gets an equal number of frames, i.e. x/y. For instance, if the system has 48 frames and 9 processes, each process will get 5 frames; the three frames not allocated to any process can be used as a free-frame buffer pool.
 Disadvantage: In systems with processes of varying sizes, it does not make much sense to give each process equal frames. Allocating a large number of frames to a small process leads to the wastage of many allocated but unused frames.
2. Proportional allocation: Frames are allocated to each process
according to the process size.

For a process p_i of size s_i, the number of allocated frames is a_i = (s_i / S) * m, where S is the sum of the sizes of all the processes and m is the number of frames in the system. For instance, in a system with 62 frames, if there is a process of 10 KB and another of 127 KB, then the first process will be allocated (10/137)*62 = 4 frames and the other process will get (127/137)*62 = 57 frames.
 Advantage: All the processes share the available frames according
to their needs, rather than equally.
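A quick C sketch of the proportional computation (the sizes and frame count are taken from the example above; integer truncation mirrors the worked numbers):

    #include <stdio.h>

    int main(void) {
        int size[] = {10, 127};            /* process sizes in KB */
        int m = 62, S = 0;                 /* frames in the system; total size */
        for (int i = 0; i < 2; i++) S += size[i];
        for (int i = 0; i < 2; i++) {
            int a = (int)((long)size[i] * m / S);   /* a_i = (s_i / S) * m */
            printf("process %d gets %d frames\n", i + 1, a);  /* 4, then 57 */
        }
        return 0;
    }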

Global vs Local Allocation

The number of frames allocated to a process can also change dynamically, depending on whether global replacement or local replacement is used for replacing pages on a page fault.

1. Local replacement: When a process needs a page which is not in the memory,
it can bring in the new page and allocate it a frame from its own set of allocated
frames only.
 Advantage: The set of pages in memory for a particular process, and hence its page-fault ratio, is affected by the paging behavior of only that process.
 Disadvantage: A low priority process may hinder a high priority process by
not making its frames available to the high priority process.

2. Global replacement: When a process needs a page that is not in memory, it can bring in the new page and allocate it a frame from the set of all frames, even if that frame is currently allocated to some other process; that is, one process can take a frame from another.
 Advantage: Does not hinder the performance of processes and hence results
in greater system throughput.
 Disadvantage: The page-fault ratio of a process cannot be controlled by the process alone; the pages in memory for a process depend on the paging behavior of other processes as well.

Thrashing
Thrashing occurs when page faults and swapping happen very frequently, at a high rate, so that the operating system has to spend more of its time swapping pages than doing useful work. Because of thrashing, CPU utilization becomes low or negligible.

The basic concept involved is that if a process is allocated too few frames,
then there will be too many and too frequent page faults. As a result, no valuable
work would be done by the CPU, and the CPU utilization would fall drastically.

The long-term scheduler would then try to improve CPU utilization by loading more processes into memory, thereby increasing the degree of multiprogramming. Unfortunately, this results in a further decrease in CPU utilization, triggering a chain reaction of more page faults followed by further increases in the degree of multiprogramming. This is thrashing.

Algorithms during Thrashing

Whenever thrashing starts, the operating system tries to apply either the global page replacement algorithm or the local page replacement algorithm.

1. Global Page Replacement

Since global page replacement can take any frame, it tries to bring in more pages whenever thrashing is detected. But what actually happens is that no process gets enough frames, and as a result the thrashing only increases. Therefore, the global page replacement algorithm is not suitable when thrashing happens.

2. Local Page Replacement

Unlike the global page replacement algorithm, local page replacement selects only pages belonging to the faulting process, so there is a chance of reducing the thrashing. However, local page replacement has proven disadvantages of its own, so it is only an alternative to global page replacement in a thrashing scenario.

Causes of Thrashing

Programs or workloads may cause thrashing, which results in severe performance problems, such as:

o If CPU utilization is too low, the operating system increases the degree of multiprogramming by introducing a new process, and a global page replacement algorithm is used. The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming further.
o CPU utilization is plotted against the degree of multiprogramming.
o As the degree of multiprogramming increases, CPU utilization also increases at first.
o If the degree of multiprogramming is increased further, thrashing sets in, and CPU utilization drops sharply.
o So, at this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming.

Elimination of Thrashing

Thrashing has negative impacts on hard drive health and system performance, so it is necessary to take actions to avoid it. The following methods help resolve the problem of thrashing:

o Adjust the swap file size: If the system swap file is not configured correctly, disk thrashing can occur.
o Increase the amount of RAM: Since insufficient memory causes disk thrashing, one solution is to add more RAM. With more memory, the computer can handle tasks easily without working excessively; generally, this is the best long-term solution.
o Decrease the number of applications running on the computer: Too many applications running in the background consume a lot of system resources, and the little that remains can lead to thrashing. Closing some applications releases their resources and helps avoid thrashing to some extent.
o Replace programs: Replace memory-heavy programs with equivalents that use less memory.

Techniques to Prevent Thrashing

Local page replacement is better than global page replacement during thrashing, but it has its own disadvantages and is not always helpful. Therefore, some other techniques are used to handle thrashing:

1. Locality Model

A locality is a set of pages that are actively used together. The locality model states that, as a process executes, it moves from one locality to another; a program is generally composed of several different localities, which may overlap.

For example, when a function is called, it defines a new locality, where memory references are made to the function's instructions, its local variables, and some global variables. Similarly, when the function is exited, the process leaves this locality.

2. Working-Set Model

This model is based on the locality model stated above.

The basic principle is that if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to a new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.

According to this model, given a parameter A (the working-set window), the working set is defined as the set of pages referenced in the most recent A page references. Hence, all actively used pages end up being part of the working set.

The accuracy of the working set depends on the value of the parameter A. If A is too large, the working set may span several localities; if A is too small, it might not cover the current locality entirely.

If D is the total demand for frames and WSS_i is the working-set size of process i, then

D = Σ WSS_i

Now, if m is the number of frames available in memory, there are two possibilities:

1. If D > m, total demand exceeds the number of frames, and thrashing will occur, since some processes will not get enough frames.

2. If D <= m, there will be no thrashing.
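For instance (numbers invented for illustration): with m = 10 frames and three processes whose working-set sizes are WSS_1 = 4, WSS_2 = 3, and WSS_3 = 2, the demand is D = 4 + 3 + 2 = 9 <= 10, so every working set fits and no thrashing occurs. If a fourth process with WSS_4 = 3 arrives, D becomes 12 > 10, and some process must be suspended until the demand fits again.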

If there are enough extra frames, more processes can be loaded into memory. On the other hand, if the summation of working-set sizes exceeds the available frames, some of the processes must be suspended (swapped out of memory).

This technique prevents thrashing along with ensuring the highest degree of
multiprogramming possible. Thus, it optimizes CPU utilization.

3. Page Fault Frequency

A more direct approach to handle thrashing is the one that uses the Page-Fault
Frequency concept.

The problem associated with thrashing is the high page fault rate, and thus,
the concept here is to control the page fault rate.

If the page fault rate is too high, it indicates that the process has too few
frames allocated to it. On the contrary, a low page fault rate indicates that the
process has too many frames.

Upper and lower limits can be established on the desired page fault rate, as
shown in the diagram.

If the page-fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page-fault rate exceeds the upper limit, more frames can be allocated to the process.

In other words, the graphical state of the system should be kept limited to the
rectangular region formed in the given diagram.

If the page-fault rate is high and no free frames are available, some processes can be suspended and their frames reallocated to other processes. The suspended processes can be restarted later.

Memory mapped file

Memory mapping refers to a process's ability to access files on disk the same way it accesses dynamic memory. Accessing RAM is obviously much faster than accessing the disk via read and write system calls. This technique saves user applications I/O overhead and buffering, but it also has its own drawbacks, as we will see later.

The operating system uses virtual memory techniques to do the trick. The OS splits the memory-mapped file into pages (similar to process pages) and loads the requested pages into physical memory on demand. If the process references a location within the file whose page is not in memory, a page fault occurs and the operating system brings the missing page in.

Memory-mapped files sound like an efficient method to access files on disk, but are they always a good option? Not necessarily. Here are a few scenarios where memory mapping is appealing:

 Randomly accessing a huge file once (or a couple of times).
 Loading a small file once, then randomly accessing it frequently.
 Sharing a file or a portion of a file between multiple applications.
 When the file contains data of great importance to the application.

Advantages
Memory mapping is an excellent technique that has various benefits.
Examples below.

 Efficiency: when dealing with large files, there is no need to read the entire file into memory first.
 Speed: accessing virtual memory is much faster than accessing the disk.
 Sharing: facilitates data sharing and interprocess communication.
 Simplicity: the program deals with memory instead of allocating space, copying data, and deallocating space.
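On POSIX systems this is exposed through the mmap system call. A minimal hedged sketch in C follows (error handling abbreviated; the file name is invented): the file is mapped once and then read as ordinary memory, with pages loaded on demand.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);            /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file read-only; pages are faulted in on first access. */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long sum = 0;                                   /* touch every byte */
        for (off_t i = 0; i < st.st_size; i++)
            sum += (unsigned char)p[i];
        printf("byte sum = %ld\n", sum);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }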

Disadvantages
The drawbacks of memory mapping are:

 Memory mapping is generally good for binary files; however, reading formatted binary file types with custom headers, such as TIFF, can be problematic.
 Memory mapping text files is less appealing, as it may require proper text handling and conversion.
 The notion that a memory-mapped file always performs better should not be taken for granted: accessing the file in memory may generate many page faults, which is expensive.
 The memory footprint is larger than that of traditional file I/O; user applications have no control over memory allocation.
 Expanding the file is hard to implement, because a memory-mapped file is assumed to be fixed in size.
Difference between memory mapped file and shared memory

 Shared memory is a RAM-only form of interprocess communication (IPC) that does not require disk operations.
 IPC can also be implemented using the memory-mapped file technique; however, it is not as fast as a pure memory-only IPC.
Memory mapped file vs named pipe

 Named pipes allow one process to communicate with another in real time, on the same computer or across a network. They are based on a client-server communication model, with a sender and a listener.
 Behind the scenes, named pipes may be implemented using shared-memory IPC.

Kernel memory allocation

The two strategies for managing free memory assigned to kernel processes are:
1. Buddy system –
The buddy allocation system is an algorithm in which a larger memory block is divided into smaller parts to satisfy a request; it is used to approximate a best fit. The two smaller parts of a block are of equal size and are called buddies. In the same manner, one of the two buddies is divided further into smaller parts until the request is fulfilled. The benefit of this technique is that two buddies can later be combined to form a larger block as memory is released.

Example – If a request of 25 KB is made, then a block of size 32 KB is allocated.

Four Types of Buddy System –

1. Binary buddy system
2. Fibonacci buddy system
3. Weighted buddy system
4. Tertiary buddy system

Why buddy system?

If the partition size and process size are different, a poor match occurs and space may be used inefficiently. The buddy system is easy to implement and more efficient than dynamic allocation.
Binary buddy system –

The buddy system maintains a list of the free blocks of each size (called a free list), so that it is easy to find a block of the desired size, if one is available. If no block of the requested size is available, Allocate searches the first nonempty list of blocks of at least the requested size. In either case, a block is removed from the free list.
Example – Assume the memory segment is initially 256 KB and the kernel requests 25 KB of memory. The segment is first divided into two buddies, call them A1 and A2, each 128 KB in size. One of these buddies is further divided into two 64 KB buddies, say B1 and B2. The next power of two above 25 KB is 32 KB, so either B1 or B2 is further divided into two 32 KB buddies (C1 and C2), and finally one of these buddies is used to satisfy the 25 KB request. A split block can only be merged with its unique buddy block, which then reforms the larger block they were split from.
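At the heart of a binary buddy allocator is rounding each request up to the next power of two. A standalone C sketch (an illustration, not kernel code) shows the block size actually allocated and the internal fragmentation that results:

    #include <stdio.h>

    /* Round a request up to the next power of two: the buddy block size. */
    static unsigned next_pow2(unsigned n) {
        unsigned b = 1;
        while (b < n) b <<= 1;
        return b;
    }

    int main(void) {
        unsigned requests[] = {25, 36, 64};    /* request sizes in KB */
        for (int i = 0; i < 3; i++) {
            unsigned block = next_pow2(requests[i]);
            printf("request %u KB -> block %u KB (waste %u KB)\n",
                   requests[i], block, block - requests[i]);
        }
        return 0;
    }

For the 25 KB request above this prints a 32 KB block with 7 KB wasted, which is exactly the internal fragmentation discussed under the drawback below.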
Fibonacci buddy system –

This is the system in which blocks are divided into sizes that are Fibonacci numbers, satisfying the relation Z_i = Z_(i-1) + Z_(i-2):
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, ...
Address calculation for the binary and weighted buddy systems is straightforward, but the original procedure for the Fibonacci buddy system was either limited to a small, fixed number of block sizes or required a time-consuming computation.
Advantages –
 In comparison to other, simpler techniques such as dynamic allocation, the buddy memory system has little external fragmentation.
 The buddy memory allocation system can be implemented with a binary tree representing used and unused split memory blocks.
 The buddy system is very fast at allocating and deallocating memory.
 In buddy systems, the cost to allocate and free a block of memory is low compared to that of best-fit or first-fit algorithms.
 Another advantage is coalescing.
 Address calculation is easy.
What is coalescing?

Coalescing is how quickly adjacent buddies can be combined to form larger segments.
For example, when the kernel releases the C1 unit it was allocated, the system can coalesce C1 and C2 into a 64 KB segment, reforming B1. This segment can in turn be coalesced with its buddy B2 to form a 128 KB segment, and ultimately we can end up with the original 256 KB segment.
Drawback –

The main drawback of the buddy system is internal fragmentation, since a larger block of memory than required is acquired. For example, a 36 KB request can only be satisfied by a 64 KB segment, and the remaining memory is wasted.
2. Slab Allocation –
A second strategy for allocating kernel memory is known as slab allocation. It eliminates fragmentation caused by allocations and deallocations, and it retains allocated memory containing a data object of a certain type for reuse upon subsequent allocations of objects of the same type. In slab allocation, memory chunks suitable to fit data objects of a certain type or size are preallocated. The cache does not free space immediately after use; it keeps track of frequently required data so that requests can be satisfied very quickly. Two terms are required:
 Slab – A slab is made up of one or more physically contiguous pages. The slab is the actual container of the data associated with objects of the specific kind of the containing cache.
 Cache – A cache represents a small amount of very fast memory and consists of one or more slabs. There is a single cache for each unique kernel data structure.

Example –
 A separate cache for the data structure representing process descriptors
 A separate cache for file objects
 A separate cache for semaphores, etc.

Each cache is populated with objects that are instantiations of the kernel data structure the cache represents. For example, the cache representing semaphores stores instances of semaphore objects, and the cache representing process descriptors stores instances of process descriptor objects.
Implementation –

The slab allocation algorithm uses caches to store kernel objects. When a cache is created, a number of objects, initially marked as free, are allocated to it. The number of objects in the cache depends on the size of the associated slab.

Example – A 12 KB slab (made up of three contiguous 4 KB pages) could store six 2 KB objects. Initially, all objects in the cache are marked as free. When a new object for a kernel data structure is needed, the allocator can assign any free object from the cache to satisfy the request; the object assigned from the cache is then marked as used.

In Linux, a slab may be in one of three possible states:

1. Full – all objects in the slab are marked as used
2. Empty – all objects in the slab are marked as free
3. Partial – the slab contains both used and free objects

The slab allocator first attempts to satisfy a request with a free object in a partial slab. If none exists, a free object is assigned from an empty slab. If no empty slabs are available, a new slab is allocated from contiguous physical pages and assigned to a cache.
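In the Linux kernel, this mechanism is exposed to kernel code through the kmem_cache API. The following is a hedged sketch of typical usage inside a hypothetical kernel module (the struct and its fields are invented for the example; this is kernel code, not a standalone program):

    #include <linux/slab.h>

    /* Hypothetical kernel object managed by its own slab cache. */
    struct my_object {
        int  id;
        char name[32];
    };

    static struct kmem_cache *my_cache;

    static int example_init(void)
    {
        /* One cache per unique kernel data structure, as described above. */
        my_cache = kmem_cache_create("my_object_cache",
                                     sizeof(struct my_object), 0,
                                     SLAB_HWCACHE_ALIGN, NULL);
        if (!my_cache)
            return -ENOMEM;

        /* Allocate an object from the cache (it is marked used)... */
        struct my_object *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
        if (!obj) {
            kmem_cache_destroy(my_cache);
            return -ENOMEM;
        }

        /* ...and release it back (marked free, immediately reusable). */
        kmem_cache_free(my_cache, obj);
        kmem_cache_destroy(my_cache);
        return 0;
    }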
Benefits of slab allocator –

 No memory is wasted due to fragmentation, because each unique kernel data structure has an associated cache.
 Memory requests can be satisfied quickly.
 The slab allocation scheme is particularly effective for objects that are frequently allocated and deallocated. Allocating and releasing memory can be time-consuming, but here objects are created in advance and can be allocated quickly from the cache. When the kernel has finished with an object and releases it, the object is marked as free and returned to its cache, making it immediately available for subsequent requests from the kernel.
