Memory allocation-UNIT-4
Memory allocation:
First fit:-
In the first fit, allocate the first hole that is big enough for the process. Here, in this example, the 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.
Best fit:-
In the best fit, allocate the smallest hole that is big enough for the process's requirements. For this, we must search the entire list, unless the list is ordered by size.
Here in this example, we first traverse the complete list and find that the last hole, 25 KB, is the best suitable hole for process A (size 25 KB).
In this method, memory utilization is maximum as compared to the other memory allocation techniques.
Worst fit:- In the worst fit, allocate the largest available hole to the process. This method produces the largest leftover hole.
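The three strategies differ only in which free hole they select. Below is a minimal sketch in C; the hole sizes and function names are illustrative (they match the 25 KB example above) and are not taken from any real allocator.

#include <stdio.h>

/* Illustrative list of free holes, sizes in KB. */
int holes[] = {10, 20, 40, 30, 25};
int n = 5;

/* First fit: return the index of the first hole that is large enough. */
int first_fit(int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;
    return -1;                      /* no hole is big enough */
}

/* Best fit: scan the whole list and return the smallest sufficient hole. */
int best_fit(int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;
    return best;
}

/* Worst fit: scan the whole list and return the largest sufficient hole. */
int worst_fit(int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;
    return worst;
}

int main(void) {
    /* Process A requests 25 KB, as in the example above. */
    printf("first fit -> hole %d\n", first_fit(25));  /* 40 KB hole (index 2) */
    printf("best fit  -> hole %d\n", best_fit(25));   /* 25 KB hole (index 4) */
    printf("worst fit -> hole %d\n", worst_fit(25));  /* 40 KB hole (index 2) */
    return 0;
}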
Working
In the Shared Memory system, the cooperating processes communicate to exchange data with each other. For this, the cooperating processes establish a shared region in their memory. The processes share data by reading and writing data in this shared segment.
Consider two cooperating processes P1 and P2. Both processes P1 and P2 have their own, different address spaces. Now let us assume P1 wants to share some data with P2.
So, P1 and P2 will have to perform the following steps −
Step 1 − Process P1 has some data to share with process P2. First P1 takes
initiative and establishes a shared memory region in its own address space and
stores the data or information to be shared in its shared memory region.
Step 2 − Now, P2 requires the information stored in the shared segment of P1. So,
process P2 needs to attach itself to the shared address space of P1. Now, P2 can
read out the data from there.
Step 3 − The two processes can now exchange information by reading and writing data in the shared segment.
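As one possible illustration of these steps on a POSIX system, the sketch below creates a named shared-memory region with shm_open and maps it with mmap; the region name, size, and message are arbitrary assumptions, and error handling is omitted for brevity.

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/demo_shm"   /* arbitrary name for the shared region */
#define SHM_SIZE 4096

int main(void) {
    /* Step 1 (writer, e.g. P1): create the shared region and store data. */
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
    ftruncate(fd, SHM_SIZE);
    char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    strcpy(region, "data shared by P1");

    /* Step 2 (reader, e.g. P2): another process would shm_open the same
     * name, mmap it into its own address space, and read the data. */

    /* Step 3: both processes exchange information by reading and
     * writing in this shared region. */

    munmap(region, SHM_SIZE);
    close(fd);
    shm_unlink(SHM_NAME);      /* remove the region when finished */
    return 0;
}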
Advantages
The advantages of Shared Memory are as follows −
Shared memory is a faster inter-process communication system.
It allows cooperating processes to access the same pieces of data
concurrently.
It speeds up the computation of the system by letting long tasks be divided into smaller sub-tasks that can be executed in parallel.
Modularity is achieved in a shared memory system.
Users can perform multiple tasks at a time.
Partition Allocation
Memory is divided into different blocks or partitions, and each process is allocated memory according to its requirement. When partitions match the size requested, partition allocation avoids internal fragmentation.
First Fit: In this type of fit, the partition allocated is the first sufficient block from the beginning of the main memory.
Best Fit: It allocates the process to the smallest sufficient partition among the free partitions.
Worst Fit: It allocates the process to the largest sufficient freely available partition in the main memory.
Next Fit: It is mostly similar to First Fit, but it searches for the first sufficient partition starting from the last allocation point (a sketch follows below).
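A minimal sketch of Next Fit, assuming the same kind of free-block list as in the earlier example; the roving index "last" simply remembers where the previous search ended.

#include <stdio.h>

/* Next fit: like first fit, but the search resumes from just after the
 * partition chosen by the previous allocation (the roving index 'last'). */
static int last = 0;

int next_fit(int blocks[], int n, int request) {
    for (int k = 0; k < n; k++) {
        int i = (last + k) % n;        /* wrap around the partition list */
        if (blocks[i] >= request) {
            last = (i + 1) % n;        /* next search starts after this one */
            return i;
        }
    }
    return -1;                         /* no sufficient partition found */
}

int main(void) {
    int blocks[] = {10, 20, 40, 30, 25};
    printf("%d\n", next_fit(blocks, 5, 25));  /* -> 2 (40 KB block) */
    printf("%d\n", next_fit(blocks, 5, 25));  /* -> 3 (search resumed after it) */
    return 0;
}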
What is Paging?
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, the main memory is divided into small fixed-size blocks of physical memory, which are called frames. The size of a frame is kept the same as that of a page to have maximum utilization of the main memory and to avoid external fragmentation. Paging is used for faster access to data, and it is a logical concept.
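To make the page-to-frame mapping concrete, here is a small, hedged sketch of logical-to-physical address translation; the page size, the single-level page table, and its contents are illustrative assumptions.

#include <stdio.h>

#define PAGE_SIZE 1024              /* illustrative page/frame size in bytes */

/* Illustrative page table: page_table[p] gives the frame holding page p. */
int page_table[] = {5, 2, 7, 0};

/* Translate a logical address into a physical address. */
unsigned translate(unsigned logical) {
    unsigned page   = logical / PAGE_SIZE;   /* page number */
    unsigned offset = logical % PAGE_SIZE;   /* offset within the page */
    unsigned frame  = page_table[page];      /* look up the frame */
    return frame * PAGE_SIZE + offset;       /* physical address */
}

int main(void) {
    unsigned logical = 2 * PAGE_SIZE + 100;  /* byte 100 of page 2 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}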
What is Fragmentation?
As processes are loaded into and removed from memory, the free memory space gets broken into pieces that are too small to be used by other processes. Fragmentation is of two types:
1. External fragmentation
2. Internal fragmentation
The details about each segment are stored in a table called a segment table. The segment table is stored in one (or more) of the segments.
The operating system does not care about the user's view of the process: paging may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.
It is better to have segmentation, which divides the process into segments. Each segment contains the same type of functions; for example, the main function can be included in one segment and the library functions in another segment.
In segmentation, the logical address contains two parts:
1. Segment Number
2. Offset
For Example:
Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. Then the maximum segment size is 4096 and the maximum number of segments that can be referred to is 16.
When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; this space information is obtained from the free list maintained by the memory manager. Then it tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.
The operating system also generates a segment map table for each program.
With the help of segment map tables and hardware assistance, the operating system can easily translate a logical address into a physical address during execution of a program.
The segment number is used as an index into the segment table. The limit of the respective segment is compared with the offset. If the offset is less than the limit, the address is valid; otherwise, an error is raised because the address is invalid.
For a valid address, the base address of the segment is added to the offset to get the physical address of the actual word in main memory.
This is how address translation is done in the case of segmentation.
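A hedged sketch of this translation in C, assuming the 16-bit address format from the example above (4-bit segment number, 12-bit offset); the segment table entries are made-up sample values.

#include <stdio.h>

/* Illustrative segment table entry: base and limit of each segment. */
struct segment { unsigned base, limit; };

/* Assumed layout: 4-bit segment number, 12-bit offset, as above. */
#define SEG_BITS    4
#define OFFSET_BITS 12

struct segment seg_table[1 << SEG_BITS] = {
    {0, 0}, {1400, 1000}, {6300, 400}, {4300, 1100}   /* sample entries */
};

/* Translate a 16-bit logical address; returns -1 if the offset is invalid. */
long translate(unsigned logical) {
    unsigned seg    = logical >> OFFSET_BITS;          /* segment number */
    unsigned offset = logical & ((1u << OFFSET_BITS) - 1);

    if (offset >= seg_table[seg].limit)
        return -1;                        /* trap: address is invalid */
    return seg_table[seg].base + offset;  /* base + offset = physical address */
}

int main(void) {
    unsigned logical = (2u << OFFSET_BITS) | 53;  /* segment 2, offset 53 */
    printf("physical address: %ld\n", translate(logical));
    return 0;
}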
Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than the entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
Disadvantages
1. It can have external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
Virtual Memory is a space where large programs can store themselves in the form of pages during their execution, while only the required pages or portions of processes are loaded into the main memory. This technique is useful because a large virtual memory is provided for user programs even when only a very small physical memory is available. Thus, virtual memory is a technique that allows the execution of processes that are not completely in physical memory.
Virtual Memory mainly gives the illusion of more physical memory than there really is
with the help of Demand Paging.
In real scenarios, most processes never need all their pages at once, for the following reasons:
Error handling code is not needed unless that specific error occurs, some of
which are quite rare.
Arrays are often over-sized for worst-case scenarios, and only a small fraction
of the arrays are actually used in practice.
Certain features of certain programs are rarely used.
In an operating system, memory is usually managed in the form of units known as pages. Basically, these are the atomic units used to store large programs.
Virtual memory can be implemented with the help of:-
1. Demand Paging
2. Demand Segmentation
The need for virtual memory arises because, with the help of the operating system, only a few pieces of a program are brought into the main memory at a time.
The portion of the process that is brought into the main memory is known as the Resident Set.
Whenever an address is needed that is not in the main memory, an interrupt is generated and the process is placed in the blocked state by the operating system. The pieces of the process that contain that logical address are then brought into the main memory.
Demand Paging
The basic idea behind demand paging is that when a process is swapped in, its pages are not swapped in all at once. Rather, they are swapped in only when the process needs them (on demand). Initially, only those pages are loaded which will be required by the process immediately.
The pages that are not moved into memory are marked as invalid in the page table. For an invalid entry, the rest of the entry is empty. The pages that are loaded in memory are marked as valid, along with the information about where to find the swapped-out page.
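A minimal, hedged sketch of how such a valid/invalid bit might drive demand loading; the page-table structure, frame counter, and "loading from disk" step are illustrative stand-ins, not a real kernel data structure.

#include <stdbool.h>
#include <stdio.h>

#define NUM_PAGES 8

/* Illustrative page-table entry: the valid bit says whether the page is
 * currently in a frame; invalid pages still live on the backing store. */
struct pte {
    bool valid;
    int  frame;      /* meaningful only when valid is true */
};

struct pte page_table[NUM_PAGES];   /* all entries start invalid (zeroed) */
int next_free_frame = 0;

/* Access a page: a reference to an invalid entry is a page fault,
 * and the page is loaded on demand. */
int access_page(int page) {
    if (!page_table[page].valid) {
        printf("page fault on page %d -- loading from disk\n", page);
        page_table[page].frame = next_free_frame++;  /* pretend to load it */
        page_table[page].valid = true;
    }
    return page_table[page].frame;
}

int main(void) {
    access_page(3);    /* faults, then loads */
    access_page(3);    /* already valid, no fault */
    return 0;
}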
Page Replacement
As studied in Demand Paging, only certain pages of a process are loaded initially into memory. This allows us to get more processes into memory at the same time. But what happens when a process requests more pages and no free memory is available to bring them in? The following steps can be taken to deal with this problem:
1. Put the process in the wait queue, until any other process finishes its
execution thereby freeing frames.
2. Or, remove some other process completely from the memory to free frames.
3. Or, find some pages that are not being used right now, and move them to the disk to get free frames. This technique is called page replacement and is the most commonly used. We have some good algorithms to carry out page replacement efficiently.
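As one concrete illustration of option 3, here is a hedged sketch of FIFO page replacement, chosen only because it is the simplest such algorithm; the reference string and the number of frames are arbitrary.

#include <stdio.h>

#define FRAMES 3

/* Simulate FIFO page replacement: the oldest resident page is the victim. */
int main(void) {
    int ref[] = {1, 2, 6, 12, 0, 1, 2};        /* sample reference string */
    int n     = sizeof ref / sizeof ref[0];
    int frames[FRAMES] = {-1, -1, -1};         /* -1 means an empty frame */
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }

        if (!hit) {                            /* page fault */
            frames[oldest] = ref[i];           /* replace the oldest page */
            oldest = (oldest + 1) % FRAMES;
            faults++;
        }
    }
    printf("page faults: %d\n", faults);
    return 0;
}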
Thrashing
A process that is spending more time paging than executing is said to be thrashing. In other words, the process does not have enough frames to hold all the pages needed for its execution, so it is swapping pages in and out very frequently to keep executing. Sometimes, even the pages which will be required in the near future have to be swapped out.
Initially, when the CPU utilization is low, the process scheduling mechanism loads multiple processes into memory at the same time to increase the level of multiprogramming, allocating a limited number of frames to each process. As the memory fills up, each process starts to spend a lot of time waiting for its required pages to be swapped in, again leading to low CPU utilization because most of the processes are waiting for pages. Hence the scheduler loads even more processes to increase CPU utilization; as this continues, at some point the complete system comes to a halt.
Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program’s pages out to the disk or any of the new program’s pages into the main memory. Instead, it just begins executing the new program after loading the first page and fetches that program’s pages as they are referenced.
While executing a program, if the program references a page which is not available in the main memory because it was swapped out a little earlier, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to demand the page back into memory.
Advantages
Following are the advantages of Demand Paging −
A large virtual memory is available to a process even when physical memory is small.
More processes can be kept in memory at the same time, increasing the degree of multiprogramming.
Less loading and swapping I/O is needed, since only the pages that are actually used are brought in.
Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things.
For a given page size, we need to consider only the page number, not the
entire address.
If we have a reference to a page p, then any immediately following references
to page p will never cause a page fault. Page p will be in memory after the first
reference; the immediately following references will not fault.
For example, consider the following sequence of addresses −
123,215,600,1234,76,96
If the page size is 100, these addresses fall in pages 1, 2, 6, 12, 0, 0; since the last two references are to the same page, the reference string reduces to 1, 2, 6, 12, 0.
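A small sketch of this reduction, assuming a page size of 100 as in the example; consecutive references to the same page are collapsed, following the rule stated above.

#include <stdio.h>

#define PAGE_SIZE 100

int main(void) {
    int addrs[] = {123, 215, 600, 1234, 76, 96};
    int n = sizeof addrs / sizeof addrs[0];
    int prev = -1;

    /* Keep only the page number, dropping immediate repeats of a page. */
    for (int i = 0; i < n; i++) {
        int page = addrs[i] / PAGE_SIZE;
        if (page != prev)
            printf("%d ", page);               /* prints: 1 2 6 12 0 */
        prev = page;
    }
    printf("\n");
    return 0;
}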