
MIT School of Computing

Department of Computer Science & Engineering

Third Year Engineering

21BTIS501-Operating System

Class - T.Y. (SEM-I)



Unit - III

Memory Management
AY 2024-2025 SEM-II


Unit-III Syllabus

● Background of Memory Management, Swapping, Contiguous Memory Allocation, Paging, Structure of the Page Table, Segmentation, Copy-on-Write, Page Replacement, Allocation of Frames, Thrashing.


Why Memory Management is Required?


• To allocate and de-allocate memory before and after process execution.
• To keep track of the memory space used by each process.
• To minimize fragmentation issues.
• To ensure proper utilization of main memory.
• To maintain data integrity while a process is executing.

What is Memory Management in an Operating System?

• It is a technique for controlling and managing the functionality of RAM.
• It is used to achieve better concurrency, system performance, and memory utilization.
• It is used to move processes from primary memory to secondary memory and vice versa.
• It keeps track of available memory, memory allocation, and unallocated memory.

Swapping in an Operating System


• A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution.
– Total physical memory space of processes can exceed physical memory
• Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images
• Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped
out so higher-priority process can be loaded and executed.

Swapping in an Operating System


• Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped (a rough worked example follows this list).
• System maintains a ready queue of ready-to-run processes which have memory images on disk.
• Modified versions of swapping are found on many systems
(i.e., UNIX, Linux, and Windows)
– Swapping normally disabled
– Started if more than threshold amount of memory
allocated
– Disabled again once memory demand reduced below
threshold
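As a rough illustration of the point that transfer time dominates swap time, here is a minimal sketch; the image size, transfer rate, and latency below are assumed numbers for illustration, not measurements of any real system:

```python
# Hypothetical numbers, only to show that swap time scales with the size of the memory image.
image_size_mb = 100        # size of the process's memory image (assumed)
transfer_rate_mb_s = 50    # sustained transfer rate of the backing store (assumed)
latency_s = 0.008          # average seek + rotational latency (assumed)

one_way_s = latency_s + image_size_mb / transfer_rate_mb_s
print(f"swap out: {one_way_s:.3f} s, swap out + swap in: {2 * one_way_s:.3f} s")
```

Doubling the image size roughly doubles both figures, which is one reason swapping is only enabled once memory demand crosses the threshold described above.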

Schematic View of Swapping



Advantages of Swapping

• Efficient Memory Utilization.


• Better System Performance.
• Reduced Process Blocking.
• Flexibility.


Disadvantages of Swapping

• Performance Overhead
• Storage Overhead
• Data Integrity Issues
• Increased Disk Activity


Contiguous Memory Allocation


• Main memory must support both OS and user
processes.
• Limited resource, must allocate efficiently
• Contiguous allocation is one early method
• Main memory is usually divided into two partitions:
• Resident operating system, usually held in low
memory with interrupt vector
• User processes then held in high memory.
• Each process contained in single contiguous
section of memory.

Contiguous Memory Allocation


• Relocation registers used to protect user processes
from each other, and from changing operating-system
code and data
1. Base register contains value of smallest physical
address
2. Limit register contains range of logical addresses –
each logical address must be less than the limit register
3. MMU maps logical address dynamically
4. Can then allow actions such as kernel code being transient and the kernel changing size (a minimal sketch of the base/limit check follows).
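A minimal Python sketch of the base/limit protection check described above; the register values are assumed purely for illustration:

```python
# Minimal sketch of dynamic relocation with base (relocation) and limit registers.
BASE = 300040    # base register: smallest physical address of the process (assumed)
LIMIT = 120900   # limit register: size of the process's logical address range (assumed)

def translate(logical_address: int) -> int:
    """Return the physical address, or raise to model a trap to the OS."""
    if logical_address >= LIMIT:       # every logical address must be < limit
        raise MemoryError("addressing error: trap to the operating system")
    return BASE + logical_address      # the MMU adds the base register dynamically

print(translate(0))        # 300040
print(translate(120899))   # 420939
```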

Contiguous Memory Allocation


Contiguous Memory Allocation

• This allocation can be done in two ways:


1. Fixed-size Partition Scheme
2. Variable-size Partition Scheme


1. Fixed-size Partition Scheme


• Another name for this is static partitioning. In this case, memory gets divided into multiple fixed-sized partitions.
• In this type of scheme, every partition may contain exactly one process.
• This limits the degree of multiprogramming, since the total number of partitions decides the maximum number of processes.

1. Fixed-size Partition Scheme


2. Variable-size Partition Scheme


• Dynamic partitioning is another name for this. In this scheme, partitions are allocated dynamically.
• Here, the size of each partition isn't declared initially.
• Only once we know the process size do we know the size of the partition.
• In this case, the size of the process and the partition are equal; thus, it helps prevent internal fragmentation.

2. Variable-size Partition Scheme


• On the other hand, when a process is smaller than its partition, part of the partition gets wasted (internal fragmentation).
• This occurs in static partitioning, and dynamic partitioning solves this issue.


2. Variable-size Partition Scheme


Memory Allocation
• Memory allocation is a process by which
computer programs are assigned memory or
space.
• Here, main memory is divided into two types
of partitions
1. Low Memory – Operating system resides in
this type of memory.
2. High Memory– User processes are held in
high memory.

Partition Allocation
• Memory is divided into different blocks or
partitions.
• Each process is allocated a partition according to its requirement.
• Partition allocation is an ideal method to
avoid internal fragmentation.


Partition Allocation
• Below are the various partition allocation
schemes :
1. First Fit: The process is allocated to the first sufficient free partition found when scanning from the beginning of main memory.
2. Best Fit: It allocates the process to the smallest free partition among those that are large enough.


Partition Allocation

3. Worst Fit: It allocates the process to the largest free partition in main memory that is large enough.
4. Next Fit: It is similar to First Fit, but it searches for the first sufficient partition starting from the last allocation point. (A toy simulation of these strategies follows.)
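The toy sketch below simulates First, Best, and Worst Fit over a list of free-partition sizes; the hole sizes and the request size are assumed example values, and a real allocator would also track addresses and split holes:

```python
holes = [100, 500, 200, 300, 600]   # sizes (in KB) of free partitions, in memory order (assumed)
request = 212                        # size of the incoming process in KB (assumed)

def first_fit(holes, request):
    # first hole, scanning from the start, that is large enough
    return next((i for i, h in enumerate(holes) if h >= request), None)

def best_fit(holes, request):
    # smallest hole that is still large enough
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    # largest hole that is large enough
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(fits)[1] if fits else None

print(first_fit(holes, request))  # 1 -> the 500 KB hole
print(best_fit(holes, request))   # 3 -> the 300 KB hole
print(worst_fit(holes, request))  # 4 -> the 600 KB hole
```

Next Fit would behave like first_fit, except that it remembers where the previous search stopped and resumes scanning from that point.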


Advantages of Contiguous Memory Allocation

1. It supports a user’s random access to files.


2. The user gets excellent read performance.
3. It is fairly simple to implement.


Disadvantages of Contiguous Memory Allocation


1. Having a file grow might be somewhat
difficult.
2. The disk may become fragmented.


Paging in Operating System

• Logical address space of a process can be noncontiguous;


process is allocated physical memory whenever the latter is
available.
• Divide physical memory into fixed-sized blocks called frames
(size is power of 2, between 512 bytes and 8192 bytes).
• Divide logical memory into blocks of same size called pages.
• Keep track of all free frames.
• To run a program of size n pages, need to find n free frames and
load program.
• Set up a page table to translate logical addresses to physical addresses (a minimal translation sketch follows this list).
• Paging can still cause internal fragmentation, since the last page of a process is rarely completely full.
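A minimal sketch of the translation a page table enables; the page size and the page-table contents are assumed example values:

```python
PAGE_SIZE = 4096                 # frame/page size in bytes, a power of 2 (assumed)
page_table = {0: 5, 1: 9, 2: 1}  # page number -> frame number (assumed mapping)

def translate(logical_address: int) -> int:
    page = logical_address // PAGE_SIZE    # high-order bits: page number
    offset = logical_address % PAGE_SIZE   # low-order bits: offset within the page
    frame = page_table[page]               # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(8200))   # page 2, offset 8 -> frame 1 -> physical address 4104
```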


Paging in Operating System


Segmentation in Operating System


• Segmentation is a technique to break memory into logical pieces where each piece represents a group of related information.
• Segmentation is one of the most common ways to achieve memory protection.
• Because internal fragmentation of pages takes place, the user's view of memory is lost.
• The user views memory as a combination of segments.
• In this type, the memory addresses used are not contiguous.
• Each memory segment is associated with a specific length and a set of permissions.

Segmentation in Operating System


Segmentation in Operating System


• When a process tries to access memory, the system first checks whether the process has the required permission to access that particular memory segment.
• The details about each segment are stored in a table called a segment table.
• The segment table is stored in one (or more) of the segments. The segment table mainly contains two pieces of information about each segment (a minimal lookup sketch follows):
1. Base: the base address of the segment.
2. Limit: the length of the segment.
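A minimal sketch of that lookup; the segment-table contents and the example addresses are assumed values:

```python
segment_table = {          # segment number -> (base, limit), all values assumed
    0: (1400, 1000),
    1: (6300, 400),
    2: (4300, 1100),
}

def translate(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:                    # the offset must lie within the segment length
        raise MemoryError("segment overflow: trap to the operating system")
    return base + offset                   # physical address = base + offset

print(translate(2, 53))    # 4353
print(translate(0, 999))   # 2399
```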

Difference between Paging and Segmentation


Need of Virtual Memory


• If a computer running the Windows operating system needs more memory (RAM) than is installed in the system, it uses a small portion of the hard drive for this purpose.
• When the computer runs out of space in physical memory, it writes data that it needs to remember to the hard disk in a swap file, and that space is used as virtual memory.


Disadvantages of Virtual Memory


• Special hardware for address translation - some
instructions may require 5-6 address translations!
• Difficulties in restarting instructions
(chip/microcode complexity).
• Complexity of OS.
• Overhead - a Page-fault is an expensive operation
in terms of both CPU and I/O overhead.
• Page faults are bad for real-time systems.
• Thrashing problem

Advantages of Virtual Memory


• It can provide an address space larger than the installed main memory.
• It enables more applications to be used at once.
• It has increased security because of memory isolation.
• It enables multiple larger applications to run simultaneously.
• Allocating memory is relatively inexpensive.
• It does not suffer from external fragmentation.
• Data can be moved automatically.

Page Replacement in Operating System


• Prevent over-allocation of memory by
modifying page-fault service routine to include
page replacement
• Use modify (dirty) bit to reduce overhead of
page transfers – only modified pages are written
to disk
• Page replacement completes separation between
logical memory and physical memory – large
virtual memory can be provided on a smaller
physical memory.

Page Replacement Algorithm Workflow


Basic Page Replacement


Page Replacement in Operating System


• Page Replacement Algorithms in Operating
Systems
1. First In First Out (FIFO)
2. Least Recently Used (LRU)
3. Optimal Page Replacement


First-In-First-Out (FIFO) Algorithm



Advantages
• Simple and easy to implement.
• Low overhead.
Disadvantages
• Poor performance.
• Doesn't consider the frequency of use or the time of last use; it simply replaces the oldest page.
• Suffers from Belady's Anomaly (i.e., more page faults can occur when we increase the number of page frames). A small simulation sketch follows.
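A small sketch of FIFO page replacement; the reference string and frame count below are assumed example values:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames = deque()               # oldest page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the page that was loaded earliest
            frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))   # 15 page faults with 3 frames
```

Running the same function on the reference string [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] gives 9 faults with 3 frames but 10 with 4 frames, which is the Belady's Anomaly mentioned above.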


Least Recently Used (LRU) Algorithm


• Least Recently Used page replacement
algorithm keeps track of page usage over a
short period of time.
• It works on the idea that the pages that have
been most heavily used in the past are most
likely to be used heavily in the future too.
• In LRU, whenever page replacement happens,
the page which has not been used for the
longest amount of time is replaced.

Least Recently Used (LRU) Algorithm


• Advantages
– Efficient.
– Doesn't suffer from Belady’s Anomaly.
• Disadvantages
– Complex Implementation.
– Expensive.
– Requires hardware support.


Pseudocode for LRU


Let us say that s is the main memory's capacity to hold pages and pages is the list containing all pages currently present in the main memory.
1. Iterate through the referenced pages. For each current page:
   - If the current page is already present in pages:
     1. Remove the current page from pages.
     2. Append the current page to the end of pages.
     3. Increment page hits.
   - Else:
     1. Increment page faults.
     2. If pages contains fewer pages than its capacity s: append the current page to pages.
     3. Else: remove the first page from pages, then append the current page to the end of pages.
2. Return the number of page hits and page faults.
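A direct Python sketch of the pseudocode above, keeping pages ordered from least to most recently used; the reference string and capacity in the example call are assumed values:

```python
def lru_hits_and_faults(reference_string, capacity):
    pages = []                        # least recently used at the front, most recent at the end
    hits = faults = 0
    for page in reference_string:
        if page in pages:             # hit: move the page to the most-recent end
            pages.remove(page)
            pages.append(page)
            hits += 1
        else:                         # fault: load the page, evicting the LRU page if full
            faults += 1
            if len(pages) >= capacity:
                pages.pop(0)
            pages.append(page)
    return hits, faults

print(lru_hits_and_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))
# (8, 12) -> 12 page faults with 3 frames
```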

Optimal Page Replacement


• Optimal Page Replacement algorithm is the best page
replacement algorithm as it gives the least number of
page faults.
• It is also known as OPT, clairvoyant replacement
algorithm, or Belady’s optimal page replacement
policy.
• In this algorithm, pages are replaced which would not
be used for the longest duration of time in the future,
i.e., the pages in the memory which are going to be
referred farthest in the future are replaced.

Optimal Page Replacement


• This algorithm was introduced long ago and is difficult to implement in practice because it requires future knowledge of program behavior.
• However, it is possible to implement optimal page replacement on a second run by using the page reference information collected on the first run. (A simulation sketch follows.)
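A sketch of the idea: on a fault, evict the resident page whose next use lies farthest in the future (or that is never used again). The reference string and frame count are assumed example values:

```python
def opt_faults(reference_string, num_frames):
    frames, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # victim: the resident page not referenced again, or referenced farthest in the future
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(ref, 3))   # 9 page faults with 3 frames, fewer than FIFO or LRU on this string
```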


Optimal Page Replacement


• Advantages
– Easy to Implement.
– Simple data structures are used.
– Highly efficient.
• Disadvantages
– Requires future knowledge of the program.
– Time-consuming.


Allocation of frames in OS
• The main memory of the system is divided into frames.
• The OS has to allocate a sufficient number of frames for
each process and to do so, the OS uses various algorithms.
• The five major ways to allocate frames are as follows:
1. Proportional frame allocation
2. Priority frame allocation
3. Global replacement allocation
4. Local replacement allocation
5. Equal frame allocation


1. Proportional frame allocation

• The proportional frame allocation algorithm allocates frames to each process in proportion to the size it needs for execution, out of the total number of frames the memory has (a small worked example follows).
• The only disadvantage of this algorithm is it does not
allocate frames based on priority. This situation is solved
by Priority frame allocation.
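A small worked sketch, assuming the usual proportional rule a_i = (s_i / S) * m, where s_i is the size of process i, S is the total size of all processes, and m is the number of free frames; the process sizes and frame count are assumed example values:

```python
processes = {"P1": 10, "P2": 127}   # virtual-memory sizes in pages (assumed)
free_frames = 62                     # frames available for allocation (assumed)

total_size = sum(processes.values())
allocation = {name: size * free_frames // total_size    # a_i = (s_i / S) * m, truncated
              for name, size in processes.items()}
print(allocation)   # {'P1': 4, 'P2': 57}
```

The leftover frame (62 - 4 - 57 = 1) would be handed out by whatever tie-breaking rule the system uses, for example by keeping it in a free-frame pool.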


2. Priority frame allocation

• Priority frame allocation allocates frames based on the priority of the processes and on the number of frames each one requires.
• If a process has high priority and needs more frames, it is allocated that many frames; lower-priority processes are allocated frames after it.


3. Global replacement allocation

• When a page fault occurs in the operating system, global replacement allocation takes care of it.
• A lower-priority process can give up frames to a higher-priority process to avoid page faults.


4. Local replacement allocation

• In local replacement allocation, a process selects replacement frames only from its own set of allocated frames.
• It therefore doesn't influence the behavior of other processes in the way global replacement allocation does.


5. Equal frame allocation

• In equal frame allocation, the available frames are divided equally among the processes in the operating system.
• The only disadvantage of equal frame allocation is that a process may require more frames than its equal share, while only a set number of frames is available.


What is Fragmentation?
• As processes are loaded into and removed from memory, free memory spaces are created that are too small to be used by other processes.
• After some time, processes cannot be allocated to these memory blocks because the blocks are too small, so the blocks remain unused; this is called fragmentation.
• This type of problem happens in a dynamic memory allocation system when the free blocks are so small that they cannot fulfill any request.


What is Fragmentation?
• The two types of fragmentation are:
1. External fragmentation
External fragmentation can be reduced by rearranging
memory contents to place all free memory together in a
single block.
2. Internal fragmentation
The internal fragmentation can be reduced by assigning
the smallest partition, which is still good enough to
carry the entire process.


Thrashing in OS (Operating System)


• Thrashing is a term used in computer science to describe the poor performance of a virtual memory system when the same pages are repeatedly loaded from secondary memory owing to a shortage of main memory to hold them.
• Thrashing happens in computer science when a computer's
virtual memory resources are overutilized, resulting in a
persistent state of paging and page faults, which inhibits
most application-level activity.
• It causes the computer's performance to decline or collapse.
The scenario can last indefinitely unless the user stops
certain running apps or active processes to free up extra
virtual memory resources.

Algorithms during Thrashing

1. Global Page Replacement


2. Local Page Replacement


Algorithms during Thrashing


1. Global Page Replacement
• Since global page replacement can bring any page, it
tries to bring more pages whenever thrashing is
found.
• But what actually will happen is that no process gets
enough frames, and as a result, the thrashing will
increase more and more.
• Therefore, the global page replacement algorithm is
not suitable when thrashing happens.


Algorithms during Thrashing


2. Local Page Replacement
• Unlike the global page replacement algorithm, local
page replacement will select pages which only belong
to that process.
• So there is a chance to reduce the thrashing. But it is
proven that there are many disadvantages if we use
local page replacement.
• Therefore, local page replacement is just an
alternative to global page replacement in a thrashing
scenario.