Memory Management in Operating Systems

The document discusses memory management in computer systems, detailing the importance of managing main memory for efficient program execution. It explains concepts such as logical and physical address spaces, memory hierarchy, types of memory, and various memory management techniques including contiguous and non-contiguous allocation. Additionally, it covers the advantages and disadvantages of memory management strategies like swapping and fragmentation, along with specific allocation algorithms like first-fit, best-fit, and worst-fit.

Module 4

Mrs. Prachii Shrivastava


Memory Management
► The term memory can be defined as a collection of data in a specific format.
It is used to store instructions and process data. The memory comprises a
large array or group of words or bytes, each with its own location. The
primary purpose of a computer system is to execute programs. These
programs, along with the information they access, should be in the main
memory during execution. The CPU fetches instructions from memory
according to the value of the program counter.
► Memory management mostly involves management of main memory. In a
multiprogramming computer, the Operating System resides in a part of the
main memory, and the rest is used by multiple processes. The task of
subdividing the memory among different processes is called Memory
Management. Memory management is a method in the operating system to
manage operations between main memory and disk during process execution.
The main aim of memory management is to achieve efficient utilization of
memory.
Why Memory Management is Required?
► Allocate and de-allocate memory before and after process execution.
► To keep track of the memory space used by processes.
► To minimize fragmentation issues.
► To ensure proper utilization of main memory.
► To maintain data integrity while executing processes.

Logical and Physical Address Space


•Logical Address Space: An address generated by the CPU is known as a “Logical Address”. It is also
known as a Virtual address. Logical address space can be defined as the size of the process. A logical
address can be changed.
•Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the memory
address register) is commonly known as a “Physical Address”. A Physical address is also
known as a Real address. The set of all physical addresses corresponding to these logical addresses is
known as Physical address space. A physical address is computed by MMU. The run-time mapping from
virtual to physical addresses is done by a hardware device Memory Management Unit(MMU). The physical
address always remains constant.
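The run-time mapping described above can be sketched in Python. This is a simplified model of a base-and-limit MMU; the base and limit values are illustrative, not taken from any real system:

```python
def translate(logical_addr: int, base: int, limit: int) -> int:
    """Return the physical address, or trap on an out-of-range access."""
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("trap: logical address outside the process's space")
    return base + logical_addr  # the run-time mapping the MMU performs

# A process loaded at physical address 14000 with a 3000-byte logical space:
print(translate(346, base=14000, limit=3000))  # 14346
```

The CPU only ever works with logical addresses (0 up to the limit); the physical address is produced by the hardware at access time, which is why a process can be relocated just by changing its base register.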
Memory Hierarchy
► Types of Memory Hierarchy
► This Memory Hierarchy Design is divided into 2 main types:
► External Memory or Secondary Memory: Comprising Magnetic Disk,
Optical Disk, and Magnetic Tape, i.e. peripheral storage devices which are
accessible by the processor via an I/O Module.
► Internal Memory or Primary Memory: Comprising Main Memory, Cache
Memory & CPU registers. This is directly accessible by the processor.
Memory Hierarchy Design
1. Registers
Registers are small, high-speed memory units located in the CPU. They are used to
store the most frequently used data and instructions. Registers have the fastest
access time and the smallest storage capacity, typically ranging from 16 to 64 bits.
2. Cache Memory
Cache memory is a small, fast memory unit located close to the CPU. It stores
frequently used data and instructions that have been recently accessed from the
main memory. Cache memory is designed to minimize the time it takes to access
data by providing the CPU with quick access to frequently used data.
3. Main Memory
Main memory, also known as RAM (Random Access Memory), is the primary
memory of a computer system. It has a larger storage capacity than cache memory,
but it is slower. Main memory is used to store data and instructions that are
currently in use by the CPU.
Types of Main Memory

•Static RAM: Static RAM stores the binary information in flip-flops, and the
information remains valid as long as power is supplied. Static RAM has a faster
access time and is used in implementing cache memory.
•Dynamic RAM: It stores the binary information as a charge on the capacitor.
It requires refreshing circuitry to maintain the charge on the capacitors after
a few milliseconds. It contains more memory cells per unit area as compared
to SRAM.

4. Secondary Storage
Secondary storage, such as hard disk drives (HDD) and solid-state drives
(SSD) , is a non-volatile memory unit that has a larger storage capacity than
main memory. It is used to store data and instructions that are not currently
in use by the CPU. Secondary storage has the slowest access time and is
typically the least expensive type of memory in the memory hierarchy.
5. Magnetic Disk
Magnetic Disks are simply circular plates that are fabricated with either a metal or a
plastic or a magnetized material. The Magnetic disks work at a high speed inside the
computer and these are frequently used.
6. Magnetic Tape
Magnetic Tape is simply a magnetic recording device that is covered with a plastic
film. Magnetic Tape is generally used for the backup of data. In the case of a magnetic
tape, the access time for a computer is a little slower and therefore, it requires some
amount of time for accessing the strip.
Characteristics of Memory Hierarchy
•Capacity: It is the global volume of information the memory can store. As we move from
top to bottom in the Hierarchy, the capacity increases.
•Access Time: It is the time interval between the read/write request and the availability of
the data. As we move from top to bottom in the Hierarchy, the access time increases.
•Performance: The Memory Hierarchy design ensures that frequently accessed data is
stored in faster memory to improve system performance.
•Cost Per Bit: As we move from bottom to top in the Hierarchy, the cost per bit increases
i.e. Internal Memory is costlier than External Memory.
Advantages of Memory Hierarchy
•Performance: Frequently used data is stored in faster memory (like cache), reducing
access time and improving overall system performance.
•Cost Efficiency: By combining small, fast memory (like registers and cache) with
larger, slower memory (like RAM and HDD), the system achieves a balance between
cost and performance. It saves the consumer’s price and time.
•Optimized Resource Utilization: Combines the benefits of small, fast memory and
large, cost-effective storage to maximize system performance.
•Efficient Data Management: Frequently accessed data is kept closer to the CPU,
while less frequently used data is stored in larger, slower memory, ensuring efficient
data handling.

Disadvantages of Memory Hierarchy


•Complex Design: Managing and coordinating data across different levels of the
hierarchy adds complexity to the system’s design and operation.
•Cost: Faster memory components like registers and cache are expensive, limiting
their size and increasing the overall cost of the system.
•Latency: Accessing data stored in slower memory (like secondary or tertiary
storage) increases the latency and reduces system performance.
•Maintenance Overhead: Managing and maintaining different types of memory adds
overhead in terms of hardware and software.
Swapping:
► To increase CPU utilization in multiprogramming, a
memory management scheme known as swapping can
be used. Swapping is the process of bringing a process
into memory and then temporarily copying it to the
disc after it has run for a while. The purpose of
swapping in an operating system is to access data on a
hard disc and move it to RAM so that application
programs can use it.
► The CPU scheduler determines which processes are
swapped in and which are swapped out. Consider a
multiprogramming environment that employs a
priority-based scheduling algorithm. When a
high-priority process enters the input queue, a
low-priority process is swapped out so the high-priority
process can be loaded and executed. When this
process terminates, the low-priority process is
swapped back into memory to continue its execution.
The below figure shows the swapping process in the
operating system:
Swapping has been subdivided into two concepts: swap-in and swap-out.
► Swap-out is a technique for moving a process from RAM to the hard disc.
► Swap-in is a method of transferring a program from a hard disc to main memory, or RAM.
Process of Swapping
► When the RAM is full and a new program needs to run, the operating system selects a program or data that is currently in RAM but not actively
being used.
► The selected data is moved to secondary storage, freeing up space in RAM for the new program
► When the swapped-out program is needed again, it can be swapped back into RAM, replacing another inactive program or data if necessary.
Advantages
► Swapping minimizes the waiting time for processes to be executed by using the swap space as an extension of RAM, allowing the CPU to keep
working efficiently without long delays due to memory limitations.
► Swapping allows the operating system to free up space in the main memory (RAM) by moving inactive or less critical data to secondary storage (like
a hard drive or SSD). This ensures that the available RAM is used for the most active processes and applications, which need it the most for optimal
performance.
► Using only a single main memory, the CPU can run multiple processes with the help of a swap partition.
► It allows larger programs or applications to run on systems with limited physical memory by swapping less critical data to secondary storage and
loading the necessary parts into RAM.
► By swapping out inactive processes, the operating system can prevent the system from becoming overloaded, ensuring that the most important and
active processes have access to enough memory for smooth execution.
Disadvantages
► Risk of data loss during swapping arises because of the dependency on secondary storage for temporary data retention. If the system loses power
before this data is safely written back into RAM or saved properly, it can result in the loss of important data, files, or system states.
► The number of page faults increases as the system frequently swaps pages in and out of memory, which directly impacts performance because
fetching data from the disk (or swap space) is much slower than accessing it from RAM.
► If the system swaps in and out too often, system performance can decline severely, as the CPU will spend more time swapping than executing
processes.
Contiguous Memory Allocation:
► Memory Management Techniques are basic techniques used in managing memory in the operating system. Here, we will
deal with the implementation of Contiguous Memory Management Techniques. Memory Management Techniques are classified broadly into two
categories:
► Contiguous
► Non-contiguous
What is Contiguous Memory Management?
► Contiguous memory allocation is a memory allocation strategy. As the name implies, we utilize this technique to assign contiguous blocks of
memory to each task. Thus, whenever a process asks to access the main memory, we allocate a continuous segment from the empty region to
the process based on its size. In this technique, memory is allotted in a continuous way to the processes. Contiguous Memory Management has
two types:
► Fixed(or Static) Partition
► Variable(or Dynamic) Partitioning

► 1. Fixed Partition Scheme


► In the fixed partition scheme, memory is divided into a fixed number of partitions. Fixed means the number of partitions
is fixed in the memory. In the fixed partition scheme, each partition accommodates only one process. The degree of
multi-programming is restricted by the number of partitions in the memory. The maximum size of a process is restricted by the
maximum size of a partition. Every partition is associated with limit registers.
► Limit Registers: Each partition has two limits:
► Lower Limit: Starting address of the partition.
► Upper Limit: Ending address of the partition.
Internal Fragmentation is found in fixed partition scheme. To overcome the problem of internal fragmentation, instead of
fixed partition scheme, variable partition scheme is used.
► Disadvantages of Fixed Partition Scheme
► Maximum process size <= Maximum partition size.
► The degree of multiprogramming is directly proportional to the number of
partitions.
► Internal fragmentation which is discussed above is present.
► If a process of 19 KB requests memory and the free space available is not
contiguous, we are not able to allocate the space.
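The internal fragmentation mentioned above can be illustrated with a small sketch; the partition and process sizes here are made up for this example:

```python
# Fixed 8 KB partitions; each process occupies one whole partition,
# so any unused space inside a partition is internally fragmented.
partitions = [8, 8, 8, 8]        # partition sizes in KB (illustrative)
processes = [3, 5, 8, 2]         # process sizes in KB, one per partition

internal = sum(part - proc for part, proc in zip(partitions, processes))
print(internal)  # (8-3) + (8-5) + (8-8) + (8-2) = 14 KB wasted
```

Even though 14 KB of total memory is free, none of it can be given to another process, because it is trapped inside already-allocated partitions.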
► 2. Variable Partition Scheme
► In the variable partition scheme, initially memory
will be single continuous free block. Whenever the
request by the process arrives, accordingly
partition will be made in the memory. If the
smaller processes keep on coming then the larger
partitions will be made into smaller partitions.
► In variable partition schema initially, the memory
will be full contiguous free block
► Memory divided into partitions according to the
process size where process size will vary.
► One partition is allocated to each active process.
► External Fragmentation is found in variable
partition scheme. To overcome the problem of
external fragmentation, compaction technique is
used or non-contiguous memory management
techniques are used.
► Solution of External Fragmentation
1. Compaction
► Moving all the processes toward the top or towards the bottom to make free available
memory in a single continuous place is called compaction. Compaction is undesirable to
implement because it interrupts all the running processes in the memory.
► Disadvantage of Compaction
► Page fault can occur.
► It consumes CPU time (overhead).
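Compaction can be sketched as a simple list transformation. This is a minimal model, not an OS implementation: memory is a list of (owner, size) blocks, with owner None marking a free hole, and the process names and sizes are illustrative:

```python
def compact(blocks):
    """Slide allocated blocks to the start of memory and merge all
    free holes into one contiguous hole at the end.
    blocks is a list of (owner, size); owner None marks a free hole."""
    used = [(owner, size) for owner, size in blocks if owner is not None]
    free = sum(size for owner, size in blocks if owner is None)
    return used + ([(None, free)] if free else [])

mem = [("P1", 100), (None, 50), ("P2", 200), (None, 70), ("P3", 30)]
print(compact(mem))
# [('P1', 100), ('P2', 200), ('P3', 30), (None, 120)]
```

The two scattered holes of 50 and 70 become one 120-unit hole; in a real system this move requires copying every process's memory, which is the CPU overhead mentioned above.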
2. Non-contiguous memory allocation
► Physical address space: Main memory (physical memory) is divided into blocks of the same
size called frames. Frame size is defined by the operating system by comparing it with the
size of the process.
► Logical Address space: Logical memory is divided into blocks of the same size called
process pages. Page size is defined by the hardware system, and these pages are stored in the
main memory during the process in non-contiguous frames.
Advantages of Variable Partition Scheme
► Partition size = process size
► There is no internal fragmentation (which is the drawback of fixed partition schema).
► Degree of multiprogramming varies and is directly proportional to a number of processes.
Disadvantage Variable Partition Scheme
► External fragmentation is still there.
► Advantages of Contiguous Memory Management
► It’s simple to monitor how many memory blocks are still available for use,
which determines how many more processes can be allocated RAM.
► Considering that the complete file can be read from the disc in a single
session, contiguous memory allocation offers good read performance.
► Contiguous allocation is simple to set up and functions well.
► Disadvantages of Contiguous Memory Management
► Fragmentation is a problem: as files are deleted and new files are written to the disk
over time, the free space is broken into scattered holes.
► To select an appropriately sized hole while creating a new file, the system needs to know the
file's final size in advance.
► The extra space left in the holes would need to be reduced or reused once the disk
is full.

First Fit
► The first-fit algorithm searches for the first free partition that is large enough to accommodate the process. The operating system starts searching
from the beginning of the memory and allocates the first free partition that is large enough to fit the process.

► For example, suppose we have the following memory partitions:

► | 10 KB | 20 KB | 15 KB | 25 KB | 30 KB |

► Now, a process requests 18 KB of memory. The operating system starts searching from the beginning and finds the first free partition of 20 KB. It
allocates the process to that partition and keeps the remaining 2 KB as free memory.
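The first-fit search can be sketched in a few lines of Python; the partition list mirrors the example above:

```python
def first_fit(partitions, request):
    """Return the index of the first free partition that fits, else None."""
    for i, size in enumerate(partitions):
        if size >= request:
            return i
    return None

partitions = [10, 20, 15, 25, 30]      # free partition sizes in KB
i = first_fit(partitions, 18)
print(i, partitions[i] - 18)           # 1 2 -> 20 KB chosen, 2 KB left over
```

Because the scan stops at the first match, first fit is the fastest of the three algorithms but can leave larger holes untouched near the end of memory.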

Best Fit

► The best-fit algorithm searches for the smallest free partition that is large enough to accommodate the process. The operating system searches
the entire memory and selects the free partition that is closest in size to the process.

► For example, suppose we have the following memory partitions:

► | 10 KB | 20 KB | 15 KB | 25 KB | 30 KB |

► Now, a process requests 18 KB of memory. The operating system searches for the smallest free partition that is larger than 18 KB, and it finds the
partition of 20 KB. It allocates the process to that partition and keeps the remaining 2 KB as free memory.
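A minimal best-fit sketch, scanning all partitions and keeping the tightest fit, using the same example partitions:

```python
def best_fit(partitions, request):
    """Return the index of the smallest free partition that still fits."""
    best = None
    for i, size in enumerate(partitions):
        if size >= request and (best is None or size < partitions[best]):
            best = i
    return best

print(best_fit([10, 20, 15, 25, 30], 18))  # 1 -> 20 KB, leaving only 2 KB
```

Best fit minimizes the leftover space per allocation, at the cost of scanning the whole list, and tends to create many tiny, hard-to-use holes.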

Worst Fit

► The worst-fit algorithm searches for the largest free partition and allocates the process to it. This algorithm is designed to leave the largest
possible free partition for future use.

► For example, suppose we have the following memory partitions:

► | 10 KB | 20 KB | 15 KB | 25 KB | 30 KB |

► Now, a process requests 18 KB of memory. The operating system searches for the largest free partition, which is 30 KB. It allocates the process to
that partition and keeps the remaining 12 KB as free memory.
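Worst fit is the same scan with the comparison reversed, again using the example partitions from above:

```python
def worst_fit(partitions, request):
    """Return the index of the largest free partition that fits."""
    worst = None
    for i, size in enumerate(partitions):
        if size >= request and (worst is None or size > partitions[worst]):
            worst = i
    return worst

print(worst_fit([10, 20, 15, 25, 30], 18))  # 4 -> 30 KB, leaving 12 KB
```

The leftover 12 KB hole is deliberately large, on the theory that a big remainder is more likely to be useful to a future process than the tiny slivers best fit leaves behind.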
Non Contiguous:

► There are two fundamental approaches to implementing non-contiguous


memory allocation. Paging and Segmentation are the two ways that allow a
process’s physical address space to be non-contiguous. It has
the advantage of reducing memory wastage but it increases the overheads
due to address translation. It slows the execution of the memory because
time is consumed in address translation.
► Paging
► Segmentation
Paging:
► Paging divides logical memory into pages and places them into frames of main memory.
► Paging is a memory management scheme that eliminates the need for a contiguous allocation of physical
memory. The process of retrieving processes in the form of pages from the secondary storage into the main
memory is known as paging. The basic purpose of paging is to separate each process into pages.
► The mapping between logical pages and physical page frames is maintained by the page table, which is used by
the memory management unit to translate logical addresses into physical addresses. The page table maps each
logical page number to a physical page frame number. By using a Page Table, the operating system keeps track
of the mapping between logical addresses (used by programs) and physical addresses (actual locations in
memory).
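The translation the MMU performs can be sketched as follows. The page size and page-table contents here are made-up example values, and the page table is modeled as a plain dictionary:

```python
PAGE_SIZE = 1024  # bytes; an assumed page size for this example

def translate(logical_addr, page_table):
    """Split the address into (page, offset) and map page -> frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]      # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2, 2: 7}  # logical page -> physical frame (made up)
print(translate(2 * PAGE_SIZE + 100, page_table))  # 7 * 1024 + 100 = 7268
```

Note that the offset passes through unchanged; only the page number is rewritten, which is why frames can be scattered anywhere in physical memory.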
Page table:
► A Page Table is a data structure used by the operating system to keep track of
the mapping between virtual addresses used by a process and the
corresponding physical addresses in the system’s memory.
► A Page Table Entry (PTE) is an entry in the Page Table that stores information
about a particular page of memory. Each PTE contains information such as the
physical address of the page in memory, whether the page is present in
memory or not, whether it is writable or not, and other access permissions.
► The size and format of a PTE can vary depending on the architecture of the
system and the operating system used. In general, a PTE contains enough
information to allow the operating system to manage memory efficiently and
protect the system from malicious or accidental access to memory.

Information Stored in Page Table Entry
► Frame Number – It gives the frame number in which the current page you are looking for is present. The
number of bits required depends on the number of frames. Frame bit is also known as address translation
bit.

► Present/Absent Bit: The present or absent bit says whether a particular page you are looking for is present or
absent. If it is not present, that is called a Page Fault. It is set to 0 if the corresponding page is not in
memory. It is used by the operating system to control page faults and support virtual memory. Sometimes this bit
is also known as a valid/invalid bit.
► Protection Bit: The protection bit says what kind of protection you want on that page. So, these bits are
for the protection of the page frame (read, write, etc).
► Referenced Bit: Referenced bit will say whether this page has been referred to in the last clock cycle or
not. It is set to 1 by hardware when the page is accessed.
► Caching Enabled/Disabled: Sometimes we need fresh data. Say the user is typing some information
on the keyboard and the program should run according to that input. The
information comes into main memory, so main memory contains the latest information
typed by the user. If that page is cached, the cache may still show old
information. So whenever freshness is required, we do not want to go through caching or many levels of memory.
The information at the level closest to the CPU and the information at the level closest to
the user might differ. We want the information to be consistent: whatever
information the user has given, the CPU should see it as soon as possible. That is why we
may want to disable caching. So, this bit enables or disables caching of the page.
► Modified Bit: Modified bit says whether the page has been modified or not. Modified means sometimes you
might try to write something onto the page. If a page is modified, then whenever you should replace that
page with some other page, then the modified information should be kept on the hard disk or it has to be
written back or it has to be saved back. It is set to 1 by hardware on the write-access to a page which is
used to avoid writing when swapped out. Sometimes this modified bit is also called the Dirty bit.
Numerical based on page table:
Page replacement Algorithm:

► Three Page Replacement Algorithms we are going to learn:

► 1. FIFO
► 2. Optimal Page Replacement.
► 3. Least Recently Used (LRU).
First in First Out (FIFO)
► This is the simplest page replacement algorithm. In this algorithm, the operating system
keeps track of all pages in the memory in a queue, the oldest page is in the front of the
queue. When a page needs to be replaced page in the front of the queue is selected for
removal.
► Example 1: Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3-page frames. Find the
number of page faults using FIFO Page Replacement Algorithm.
► Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —>
3 Page Faults.
When 3 comes, it is already in memory, so —> 0 Page Faults. Then 5 comes; it is not
in memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it
is also not in memory, so it replaces the oldest page, i.e. 3 —> 1 Page Fault.
Finally, when 3 comes, it is not in memory, so it replaces 0 —> 1 Page Fault. Total = 6 Page Faults.
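The walkthrough above can be checked with a short simulation. This is a minimal sketch of FIFO replacement, tracking resident pages in a set and eviction order in a queue:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for FIFO replacement with `frames` page frames."""
    memory, order, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(order.popleft())  # evict the oldest page
            memory.add(page)
            order.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6
```

Note that a hit does not change the queue: FIFO evicts by arrival time, not by recency of use, which is exactly what distinguishes it from LRU.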

Optimal Page Replacement
► In this algorithm, pages are replaced which would not be used for the longest
duration of time in the future.
► Example: Consider the page references 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
with 4-page frame. Find number of page fault using Optimal Page
Replacement Algorithm.
► Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots
—> 4 Page Faults.
0 is already there so —> 0 Page Fault. When 3 comes it takes the place of 7
because 7 is not used for the longest duration of time in the future —> 1
Page Fault. 0 is already there so —> 0 Page Fault. 4 takes the place of 1 —>
1 Page Fault. The remaining references are all hits —> 0 Page Faults. Total = 6 Page Faults.
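Optimal replacement requires knowing the future reference string, so it is only a benchmark, not something an OS can implement; but it is easy to simulate offline. A minimal sketch, evicting the resident page whose next use lies farthest ahead:

```python
def optimal_faults(refs, frames):
    """Count page faults when evicting the page used farthest in the future."""
    memory, faults = set(), 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) == frames:
            def next_use(p, rest=refs[i + 1:]):
                # pages never used again sort as "infinitely far away"
                return rest.index(p) if p in rest else float("inf")
            memory.discard(max(memory, key=next_use))
        memory.add(page)
    return faults

print(optimal_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

This matches the walkthrough above: no algorithm can do better than these 6 faults on this reference string, which is why optimal is used as the yardstick for FIFO and LRU.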
Least Recently Used
► In this algorithm, page will be replaced which is least recently used.
► Example Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3
with 4-page frames. Find number of page faults using LRU Page Replacement
Algorithm.
► Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4
Page Faults.
0 is already there so —> 0 Page Fault. When 3 comes it takes the place of 7
because 7 is least recently used —> 1 Page Fault.
0 is already in memory so —> 0 Page Fault.
4 takes the place of 1 —> 1 Page Fault.
For the rest of the page reference string —> 0 Page Faults because the pages are
already available in memory. Total = 6 Page Faults.
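LRU can be simulated by recording the last time each page was touched and evicting the page with the oldest timestamp. A minimal sketch, using the same reference string as the example:

```python
def lru_faults(refs, frames):
    """Count page faults when evicting the least recently used page."""
    memory, last_used, faults = set(), {}, 0
    for t, page in enumerate(refs):
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                lru = min(memory, key=lambda p: last_used[p])
                memory.discard(lru)   # evict the page untouched longest
            memory.add(page)
        last_used[page] = t           # a hit also refreshes recency
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

On this particular string LRU happens to match optimal's 6 faults; in general LRU approximates optimal by using the recent past as a predictor of the near future.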
Segmentation

► A process is divided into segments. The chunks that a program is divided into,
which are not necessarily all of the same size, are called segments.
Segmentation gives the user’s view of the process which paging does not
provide. Here the user’s view is mapped to physical memory.
Types of Segmentation in Operating Systems
► Virtual Memory Segmentation: Each process is divided into a number of
segments, but the segmentation is not done all at once. This segmentation
may or may not take place at the run time of the program.
► Simple Segmentation: Each process is divided into a number of segments, all
of which are loaded into memory at run time, though not necessarily
contiguously.
Segmentation
Difference between Paging and Segmentation:
Self Learning Topics

► Advantages, disadvantages of paging.


► Types of paging.
► Advantages, disadvantages of segmentation.
► Copy on Write (COW)
► Thrashing.
