
CSC 310

Advanced Memory Management


The term memory can be defined as a collection of data in a specific format. It is used to store instructions and the data being processed. Memory comprises a large array of words or bytes, each with its own address. The primary purpose of a computer system is to execute programs, and these programs, along with the data they access, must be in main memory during execution. The CPU fetches instructions from memory according to the value of the program counter.
To achieve a useful degree of multiprogramming and proper utilization of memory, memory management is important. Many memory management methods exist, reflecting various approaches, and the effectiveness of each depends on the situation.

What is Main Memory?

Main memory is central to the operation of a modern computer. It is a large array of words or bytes, ranging in size from hundreds of thousands to billions, and serves as a repository of quickly accessible data shared by the CPU and I/O devices. Main memory is where programs and data are kept while the processor is actively using them. Because main memory is closely coupled to the processor, moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). It is volatile: RAM loses its data when power is interrupted.

Memory Management
In a multiprogramming computer, the operating system resides in one part of memory and the rest is shared by multiple processes. The task of subdividing memory among different processes is called memory management: the method by which the operating system coordinates operations between main memory and disk during process execution. Its main aim is efficient utilization of memory.
Memory management can also be defined as the process of controlling and coordinating computer memory, assigning portions known as blocks to the various running programs to optimize overall system performance.
It is one of the most important functions of an operating system. It allows processes to move back and forth between main memory and disk during execution, and it lets the operating system keep track of every memory location, whether it is allocated to some process or free.
Why use Memory Management?

 It determines how much memory to allocate to each process and decides which process gets memory at what time.
 It tracks when memory is freed or unallocated and updates its status accordingly.
 It allocates space to application routines.
 It ensures that applications do not interfere with one another and protects processes from each other.
 It places programs in memory so that memory is utilized to its full extent.
 It allocates and de-allocates memory before and after process execution.
 It keeps track of the memory space used by each process.
 It minimizes fragmentation.
 It ensures proper utilization of main memory.
 It maintains data integrity during process execution.

Memory Management Techniques

Memory Allocation
To achieve proper memory utilization, memory must be allocated efficiently. One of the simplest methods is to divide memory into several fixed-sized partitions, each containing exactly one process; the degree of multiprogramming is then bounded by the number of partitions.
Multiple Partition Allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.

Variable Partition Allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory, known as a "hole". When a process arrives and needs memory, we search for a hole that is large enough to hold it. If one is found, we allocate only as much memory as is needed and keep the rest available to satisfy future requests. Allocating memory this way raises the dynamic storage allocation problem: how to satisfy a request of size n from a list of free holes. There are several solutions to this problem:

First Fit
In first fit, the first free hole that satisfies the requirement of the process is allocated.
First Fit Algorithm
1- Input memory blocks with size and processes with size.
2- Initialize all memory blocks as free.
3- Pick each process and check if it can
be assigned to the current block.
4- If size-of-process <= size-of-block, then
assign it and check the next process.
5- If not, keep checking the other blocks.
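The steps above can be sketched in Python; the block and process sizes below are hypothetical and chosen only for illustration:

```python
def first_fit(block_sizes, process_sizes):
    """Assign each process to the first free block large enough to hold it.

    Returns a list where allocation[i] is the block index given to
    process i, or None if no block fits.
    """
    remaining = list(block_sizes)            # free space left in each block
    allocation = [None] * len(process_sizes)
    for i, need in enumerate(process_sizes):
        for j, free in enumerate(remaining):
            if need <= free:                 # the first hole that fits wins
                allocation[i] = j
                remaining[j] -= need
                break
    return allocation

print(first_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# → [1, 4, 1, None]: the last process finds no hole large enough
```

Note that first fit stops searching as soon as any hole fits, which makes it fast but can leave larger holes broken up early.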

Best Fit
Best fit allocates the process to the smallest hole that is big enough for its requirements. This requires searching the entire list, unless the list is kept ordered by size. Of these allocation techniques, best fit generally achieves the highest memory utilization.
Best Fit Algorithm
1- Input memory blocks and processes with sizes.
2- Initialize all memory blocks as free.
3- For each process, find the smallest block
that is still large enough, i.e. the minimum of
blockSize[1], blockSize[2], ..., blockSize[n]
that is >= processSize[current]; if found,
assign it to the current process.
4- If no block fits, leave that process and keep
checking the other processes.
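A minimal Python sketch of best fit, again with hypothetical block and process sizes:

```python
def best_fit(block_sizes, process_sizes):
    """Assign each process to the smallest free block that can hold it."""
    remaining = list(block_sizes)            # free space left in each block
    allocation = [None] * len(process_sizes)
    for i, need in enumerate(process_sizes):
        best = None
        for j, free in enumerate(remaining):
            # candidate must fit, and we keep the tightest fit seen so far
            if need <= free and (best is None or free < remaining[best]):
                best = j
        if best is not None:
            allocation[i] = best
            remaining[best] -= need
    return allocation

print(best_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# → [3, 1, 2, 4]: every process fits, with minimal leftover per block
```

With these sizes best fit places all four processes, whereas first fit leaves the last one unallocated, which illustrates the utilization claim above.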

Worst Fit

Worst fit allocates the process to the largest sufficiently large free partition in main memory. Inefficient memory utilization is the major drawback of worst fit.

Worst Fit Algorithm

1- Input memory blocks and processes with sizes.
2- Initialize all memory blocks as free.
3- For each process, find the largest block,
i.e. the maximum of blockSize[1],
blockSize[2], ..., blockSize[n]; if it is
>= processSize[current], assign it
to the current process.
4- If no block fits, leave that process and keep
checking the other processes.
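The same sketch adapted to worst fit, with the same hypothetical sizes:

```python
def worst_fit(block_sizes, process_sizes):
    """Assign each process to the largest free block, if it fits."""
    remaining = list(block_sizes)            # free space left in each block
    allocation = [None] * len(process_sizes)
    for i, need in enumerate(process_sizes):
        # the largest hole overall is the only candidate worst fit considers
        worst = max(range(len(remaining)), key=lambda j: remaining[j])
        if remaining[worst] >= need:
            allocation[i] = worst
            remaining[worst] -= need
    return allocation

print(worst_fit([100, 500, 200, 300, 600], [212, 417, 112, 426]))
# → [4, 1, 4, None]: the big holes get carved up, so the last process fails
```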

Single Contiguous Allocation


Main memory must accommodate both the operating system and the various user processes, so allocating memory is an important task of the operating system. Memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously, so we must consider how to allocate available memory to the processes waiting in the input queue to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory. The MS-DOS operating system, for example, allocates memory this way, and many embedded systems run a single application in a contiguous region.

Paged Memory Management


Paging is a memory management mechanism that allows the operating system to retrieve processes from secondary storage into main memory in the form of pages. In paging, main memory is divided into small fixed-size blocks of physical memory called frames, and a process's logical memory is divided into blocks of the same size called pages. The frame size is kept the same as the page size to maximize utilization of main memory and to avoid external fragmentation. Paging provides faster access to data and is a logical concept: it eliminates the need for contiguous allocation of physical memory, permitting the physical address space of a process to be non-contiguous. The hardware memory management unit maps pages to frames, and memory is allocated on a per-page basis.
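As a rough illustration of the page-to-frame mapping, a logical address can be split into a page number and an offset; the 4 KB page size and the page table contents below are assumptions made up for the example:

```python
PAGE_SIZE = 4096  # 4 KB pages; a common size, assumed here for illustration

def translate(logical_address, page_table):
    """Split a logical address into (page, offset) and map it to a
    physical address using the page table (page number -> frame number)."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]          # the MMU performs this lookup in hardware
    return frame * PAGE_SIZE + offset

# Hypothetical page table: page 0 lives in frame 5, page 1 in frame 2
page_table = {0: 5, 1: 2}
print(translate(4100, page_table))    # page 1, offset 4 → 2*4096 + 4 = 8196
```

Because the pages of one process may map to frames scattered anywhere in physical memory, the process's physical address space need not be contiguous.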

What is Swapping?

Before a process can execute, it must reside in main memory. Swapping temporarily moves a process out of main memory into secondary storage, which is slow compared to main memory; the process is later brought back into main memory to continue execution.

Swapping allows more processes to be run than can fit into memory at one time. It is also known as roll out, roll in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority one. When the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.

Benefits of Swapping
Here are the major benefits of swapping:

 It offers a higher degree of multiprogramming.
 It allows dynamic relocation: if address binding is done at execution time, a process can be swapped back into a different location; with compile-time or load-time binding, a process must be swapped back to the same location.
 It helps achieve better utilization of memory.
 It wastes little CPU time, and it can easily be combined with priority-based scheduling to improve performance.

What is Fragmentation?
As processes are loaded into and removed from memory, the free memory space is broken into small holes, often too small to be used by other processes. Eventually, processes cannot be allocated memory because the free blocks are too small, and blocks of memory that remain unused in this way are called fragments. Fragmentation, then, arises when processes loaded into and removed from memory leave behind small free holes that cannot be assigned to new processes, either because the holes are not combined or because they do not meet the memory requirements of any process. To maintain a good degree of multiprogramming, we must reduce this waste of memory.

There are two types of fragmentation:

1. Internal fragmentation
2. External fragmentation

Internal Fragmentation:

Internal fragmentation occurs when a process is allocated a memory block larger than it requested; the leftover space inside the block is unused and wasted.

Example: Suppose fixed partitioning is used and memory contains blocks of 3MB, 6MB, and 7MB. A new process p4 of size 2MB arrives and requests a block of memory. It is given the 3MB block, but 1MB of that block is wasted and cannot be allocated to any other process. This is internal fragmentation.
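The arithmetic of the example above can be checked with a short sketch (the partition sizes come from the example; the smallest-fit rule is the one described later for reducing internal fragmentation):

```python
# Fixed partitions of 3 MB, 6 MB and 7 MB, as in the example above
partitions = [3, 6, 7]            # sizes in MB
process_size = 2                  # process p4 needs 2 MB

# give p4 the smallest partition that still holds it
partition = min(p for p in partitions if p >= process_size)
internal_fragmentation = partition - process_size
print(internal_fragmentation)     # → 1 (1 MB wasted inside the 3 MB partition)
```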

External Fragmentation:

In external fragmentation, enough free memory exists in total, but it cannot be assigned to a process because the free blocks are not contiguous.

Example: Continuing the example above, suppose three processes p1, p2, and p3 arrive with sizes 2MB, 4MB, and 7MB and are allocated the 3MB, 6MB, and 7MB blocks respectively. After allocation, p1 leaves 1MB free and p2 leaves 2MB free. Now a new process p4 arrives and demands a 3MB block of memory. That much memory is available in total, but we cannot assign it because the free space is not contiguous. This is external fragmentation.

Both the first-fit and best-fit memory allocation strategies suffer from external fragmentation. To overcome it, compaction is used: all free memory spaces are combined into one large block, which can then be used effectively by other processes.

 Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process.

 External fragmentation can be reduced by rearranging memory contents to place all free memory together in a single block (compaction).
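Compaction can be sketched as follows; the memory layout mirrors the p1/p2/p3 example, with holes marked by `None` and sizes in MB:

```python
def compact(memory):
    """Slide all allocated chunks to the front so the free space forms
    one contiguous block.  memory is a list of (name, size) chunks,
    where name None marks a hole."""
    used = [(name, size) for name, size in memory if name is not None]
    free = sum(size for name, size in memory if name is None)
    return used + [(None, free)]

# p1 and p2 left 1 MB and 2 MB holes behind, as in the example above
memory = [("p1", 2), (None, 1), ("p2", 4), (None, 2), ("p3", 7)]
print(compact(memory))
# → [('p1', 2), ('p2', 4), ('p3', 7), (None, 3)]
```

After compaction the two scattered holes become one contiguous 3 MB block, so a 3 MB process like p4 can now be placed. In a real system compaction also requires relocating the moved processes' addresses, which is only possible with execution-time address binding.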

Assignment
Write a short note on segmentation

Deadlock Detection and Prevention


Deadlock is a situation in which a set of processes is blocked because each process holds a resource while waiting for another resource acquired by some other process in the set. Each member of the set is waiting for a resource that can be released only by another deadlocked process. The resources may be physical or logical: physical resources include printers, tape drives, memory space, and CPU cycles, while logical resources include files, semaphores, and monitors. Problems arise when resources must be allocated to different processes, in particular whether to allocate a resource immediately or to make the requester wait. Serious trouble occurs when a set of processes is waiting for resources held by one another: each process is unknowingly waiting for a resource held by another, so none of the needed resources ever becomes available. This is deadlock.
Consider two trains approaching each other on a single track: once they are face to face, neither can move. A similar situation occurs in an operating system when two or more processes hold some resources and wait for resources held by the other(s). For example, suppose Process 1 holds Resource 1 and waits for Resource 2, which has been acquired by Process 2, while Process 2 waits for Resource 1: neither can proceed.
