Memory allocation-UNIT-4


Memory allocation:

To achieve proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is determined by the number of partitions.
Multiple partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
Fixed partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory. This available memory is known as a "hole". When a process arrives and needs memory, we search for a hole that is large enough to hold it. If a suitable hole is found, we allocate only as much memory as the process needs, keeping the rest available to satisfy future requests. This gives rise to the dynamic storage-allocation problem, which concerns how to satisfy a request of size n from a list of free holes. There are several solutions to this problem:
First fit:-
In first fit, the first free hole that is large enough is allocated to the process.

Here, in this example the 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.
Best fit:-
In best fit, we allocate the smallest hole that is big enough for the process. For this, we must search the entire list, unless the list is ordered by size.

Here, we first traverse the complete list and find that the last hole, 25 KB, is the best suitable hole for process A (size 25 KB).
In this method memory utilization is maximized as compared to the other allocation techniques.
Worst fit:-
In worst fit, we allocate the largest available hole to the process. This method produces the largest leftover hole.

Here, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is a major issue with worst fit.
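The three strategies can be summarised in a few lines of C. The sketch below is illustrative, not taken from any real allocator; the 10 KB and 20 KB hole sizes are assumed so that the list matches the examples above (40 KB, 60 KB and 25 KB holes, and process A of 25 KB).

/* Illustrative first-fit, best-fit and worst-fit searches over a list of free holes. */
#include <stdio.h>

/* Each function picks a hole for a request of `size` KB; returns the index or -1 if none fits. */
static int first_fit(const int holes[], int n, int size) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= size) return i;      /* first hole that is large enough */
    return -1;
}

static int best_fit(const int holes[], int n, int size) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= size && (best == -1 || holes[i] < holes[best]))
            best = i;                         /* smallest hole that still fits */
    return best;
}

static int worst_fit(const int holes[], int n, int size) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= size && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                        /* largest hole that fits */
    return worst;
}

int main(void) {
    /* Free holes in KB; the first two are assumed to be too small for process A. */
    int holes[] = {10, 20, 40, 60, 25};
    int n = 5, request = 25;                  /* process A needs 25 KB */

    printf("first fit -> hole of %d KB\n", holes[first_fit(holes, n, request)]);  /* 40 KB */
    printf("best fit  -> hole of %d KB\n", holes[best_fit(holes, n, request)]);   /* 25 KB */
    printf("worst fit -> hole of %d KB\n", holes[worst_fit(holes, n, request)]);  /* 60 KB */
    return 0;
}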

What is shared memory in the OS?

A shared memory system is the fundamental model of interprocess communication.

In a shared memory system, the cooperating processes communicate with each other by establishing a shared memory region in their address spaces. Shared memory is the fastest form of interprocess communication.
If a process wants to initiate communication and has some data to share, it establishes the shared memory region in its address space. After that, another process that wants to communicate and read the shared data must attach itself to the initiating process's shared address space.
Let us see the working of the shared memory system step by step.

Working
In a shared memory system, the cooperating processes communicate to exchange data with each other. To do this, they establish a shared region in memory and share data by reading and writing in that shared segment.

Let us discuss it by considering two processes, P1 and P2. Both processes have their own, different address spaces. Now let us assume P1 wants to share some data with P2.
So, P1 and P2 will have to perform the following steps −
Step 1 − Process P1 has some data to share with process P2. First, P1 takes the initiative, establishes a shared memory region in its own address space, and stores the data or information to be shared in this shared memory region.
Step 2 − Now, P2 requires the information stored in the shared segment of P1. So, process P2 needs to attach itself to the shared address space of P1. Now, P2 can read the data from there.
Step 3 − The two processes can then exchange information by reading and writing data in the shared segment.
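The three steps can be sketched with the POSIX shared memory calls (shm_open, ftruncate, mmap). This is only a minimal illustration, not a complete program pair: the object name "/demo_shm" is made up, error handling is shortened, the reader process is only described in comments, and on some systems the program may need to be linked with -lrt.

/* Minimal sketch: the initiating process (P1) creates and writes a shared region. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *name = "/demo_shm";           /* illustrative name of the shared memory object */
    const size_t size = 4096;

    /* Step 1: P1 creates the shared memory object and sets its size ... */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); return 1; }

    /* ... and maps it into its own address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* P1 writes the data to be shared. */
    strcpy(region, "hello from P1");

    /* Step 2: another process (P2) would shm_open("/demo_shm", O_RDWR, 0) and mmap it
     * the same way, attaching itself to the shared region and reading the string. */

    /* Step 3: both processes exchange data by reading and writing `region`. */
    munmap(region, size);
    close(fd);
    shm_unlink(name);                         /* remove the object when finished */
    return 0;
}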

Advantages
The advantages of Shared Memory are as follows −
 Shared memory is a fast interprocess communication mechanism.
 It allows cooperating processes to access the same pieces of data concurrently.
 It speeds up computation: a long task can be divided into smaller sub-tasks that are executed in parallel.
 Modularity is achieved in a shared memory system.
 Users can perform multiple tasks at a time.

What is Memory allocation?


Memory allocation is a process by which computer programs are
assigned memory or space.

Here, main memory is divided into two types of partitions:

1. Low Memory – the operating system resides in this part of memory.
2. High Memory – user processes are held in high memory.

Partition Allocation
Memory is divided into different blocks or partitions. Each process is allocated memory according to its requirement. Partition allocation is an ideal method to avoid internal fragmentation.

Below are the various partition allocation schemes:

 First Fit: The process is allocated to the first sufficiently large block found when searching from the beginning of main memory.
 Best Fit: The process is allocated to the smallest free partition that is large enough for it.
 Worst Fit: The process is allocated to the largest sufficiently large free partition in main memory.
 Next Fit: It is similar to first fit, but the search for a sufficient partition starts from the point of the last allocation (see the sketch below).
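Next fit can be sketched as a small variation of the first-fit search shown earlier; the hole sizes and the exact resumption rule used here are illustrative assumptions.

/* Illustrative next-fit search: resume from where the previous allocation stopped. */
#include <stdio.h>

static int last_pos = 0;                      /* remembered allocation point */

static int next_fit(int holes[], int n, int size) {
    for (int k = 0; k < n; k++) {
        int i = (last_pos + k) % n;           /* wrap around the hole list */
        if (holes[i] >= size) {
            holes[i] -= size;                 /* carve the request out of the hole */
            last_pos = i;                     /* resume here on the next request */
            return i;
        }
    }
    return -1;                                /* no hole is large enough */
}

int main(void) {
    int holes[] = {10, 20, 40, 60, 25};       /* free hole sizes in KB (illustrative) */
    printf("first request -> hole index %d\n", next_fit(holes, 5, 25));   /* index 2 (the 40 KB hole) */
    printf("second request -> hole index %d\n", next_fit(holes, 5, 25));  /* index 3: search resumed, not restarted */
    return 0;
}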

What is Paging?
Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory, which are called frames. The size of a frame is kept the same as that of a page to achieve maximum utilization of main memory and to avoid external fragmentation. Paging is used for faster access to data, and it is a logical concept.
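A short, illustrative sketch of the page/frame idea, assuming a 4 KB page size and a small made-up page table:

/* Split a logical address into page number and offset, then map page -> frame. */
#include <stdio.h>

#define PAGE_SIZE 4096u                       /* 2^12 bytes per page and per frame */

int main(void) {
    unsigned int page_table[] = {5, 2, 7, 0}; /* page i is stored in frame page_table[i] */
    unsigned int logical = 0x2A10;            /* example logical address */

    unsigned int page   = logical / PAGE_SIZE;         /* which page the address lies in */
    unsigned int offset = logical % PAGE_SIZE;         /* where inside that page */
    unsigned int physical = page_table[page] * PAGE_SIZE + offset;

    printf("page %u, offset 0x%X -> physical 0x%X\n", page, offset, physical);
    return 0;                                  /* prints: page 2, offset 0xA10 -> physical 0x7A10 */
}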

What is Fragmentation?
As processes are loaded into and removed from memory, the free memory space is broken into pieces that are too small to be used by other processes.

After some time, processes cannot be allocated to these memory blocks because of their small size, and the blocks remain unused; this is called fragmentation. This problem occurs in a dynamic memory allocation system when the free blocks are so small that they cannot satisfy any request.

Two types of Fragmentation methods are:

1. External fragmentation
2. Internal fragmentation

 External fragmentation can be reduced by rearranging memory contents to place all free memory together in a single block.
 Internal fragmentation can be reduced by assigning the smallest partition that is still large enough to hold the entire process.
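As a concrete illustration: if a process of 18 KB is loaded into a fixed 20 KB partition, the 2 KB left inside the partition cannot be used by any other process (internal fragmentation). Conversely, if the free memory consists of three separate 10 KB holes, a 25 KB process cannot be loaded even though 30 KB is free in total (external fragmentation).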

Segmentation in OS (Operating System)


In operating systems, segmentation is a memory management technique in which memory is divided into variable-sized parts. Each part is known as a segment, which can be allocated to a process.

The details about each segment are stored in a table called the segment table. The segment table is stored in one (or more) of the segments.

The segment table mainly contains two pieces of information about each segment:

1. Base: the base address of the segment.
2. Limit: the length of the segment.

Why Segmentation is required?


Till now, we were using paging as our main memory management technique. Paging is closer to the operating system than to the user. It divides all processes into pages regardless of the fact that a process may have related parts, such as functions, that would ideally be loaded in the same page.

The operating system does not care about the user's view of the process. It may divide the same function across different pages, and those pages may or may not be loaded into memory at the same time. This decreases the efficiency of the system.

It is better to have segmentation, which divides the process into segments. Each segment contains the same type of content; for example, the main function can be included in one segment and the library functions in another segment.

Translation of Logical address into physical address by segment table

The CPU generates a logical address which contains two parts:

1. Segment Number
2. Offset

For Example:

Suppose a 16-bit address is used, with 4 bits for the segment number and 12 bits for the segment offset. Then the maximum segment size is 4096 (2^12) bytes and the maximum number of segments that can be referred to is 16 (2^4).

When a program is loaded into memory, the segmentation system tries to locate space that is large enough to hold the first segment of the process; space information is obtained from the free list maintained by the memory manager. Then it tries to locate space for the other segments. Once adequate space is located for all the segments, it loads them into their respective areas.

The operating system also generates a segment map table for each program.

With the help of the segment map table and hardware assistance, the operating system can easily translate a logical address into a physical address during execution of a program.

The segment number is used to index the segment table. The limit of the respective segment is compared with the offset. If the offset is less than the limit, the address is valid; otherwise an error is raised because the address is invalid.

For valid addresses, the base address of the segment is added to the offset to get the physical address of the actual word in main memory. This is how address translation is done in the case of segmentation.
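The check described above can be sketched in a few lines of C. The 4-bit/12-bit split follows the earlier example, while the segment table contents are made up for illustration:

/* Illustrative segmentation translation: segment table of (base, limit) pairs. */
#include <stdio.h>

struct segment { unsigned int base, limit; }; /* base address and segment length */

int main(void) {
    struct segment table[16] = {
        {0x1000, 0x0800},                     /* segment 0: base 0x1000, limit 2 KB */
        {0x4000, 0x0400},                     /* segment 1: base 0x4000, limit 1 KB */
    };
    unsigned int logical = 0x1234;            /* example 16-bit logical address */

    unsigned int seg    = (logical >> 12) & 0xF;   /* upper 4 bits: segment number */
    unsigned int offset = logical & 0xFFF;         /* lower 12 bits: offset */

    if (offset < table[seg].limit) {
        unsigned int physical = table[seg].base + offset;   /* valid: add the base */
        printf("segment %u, offset 0x%X -> physical 0x%X\n", seg, offset, physical);
    } else {
        printf("trap: offset 0x%X exceeds limit of segment %u\n", offset, seg);
    }
    return 0;                                 /* prints: segment 1, offset 0x234 -> physical 0x4234 */
}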

Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the actual page size.
3. Less overhead.
4. It is easier to relocate segments than the entire address space.
5. The segment table is smaller than the page table used in paging.

Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.

What is Virtual Memory?

In this tutorial, we will be covering the concept of Virtual Memory in an Operating System.

Virtual memory is a space where large programs can store themselves in the form of pages during their execution, while only the required pages or portions of processes are loaded into main memory. This technique is useful because a large virtual memory can be provided for user programs even when the physical memory is very small. Thus, virtual memory is a technique that allows the execution of processes that are not completely in physical memory.

Virtual memory mainly gives the illusion of more physical memory than there really is, with the help of demand paging.

In real scenarios, most processes never need all their pages at once, for the following reasons:

 Error handling code is not needed unless that specific error occurs, and some errors are quite rare.
 Arrays are often over-sized for worst-case scenarios, and only a small fraction of each array is actually used in practice.
 Certain features of certain programs are rarely used.

In an operating system, memory is usually managed in units known as pages; these are the basic units used to store large programs.
Virtual memory can be implemented with the help of:

1. Demand Paging
2. Demand Segmentation

Need of Virtual Memory

Following are the reasons due to which there is a need for Virtual Memory:

 If a computer running the Windows operating system needs more memory or RAM than the memory installed in the system, it uses a small portion of the hard drive for this purpose.
 When the computer does not have enough space in physical memory, it writes things that it needs to remember into a swap file on the hard disk and treats that as virtual memory.

Benefits of having Virtual Memory

1. Large programs can be written, as the virtual address space available is huge compared to physical memory.
2. Less I/O is required, which leads to faster and easier swapping of processes.
3. More physical memory is available, as programs are stored in virtual memory and occupy very little space in actual physical memory.
4. Therefore, the logical address space can be much larger than the physical address space.
5. Virtual memory allows address spaces to be shared by several processes.
6. During process creation, virtual memory allows copy-on-write and memory-mapped files.

Execution of the Program in the Virtual memory

With the help of the operating system, only a few pieces of the program are brought into main memory:

 The portion of the process that is brought into main memory is known as the resident set.

Whenever an address is needed that is not in main memory, an interrupt is generated. The process is placed in the blocked state by the operating system, and the pieces of the process that contain that logical address are brought into main memory.

Demand Paging

The basic idea behind demand paging is that when a process is swapped in, its pages are not swapped in all at once. Rather, they are swapped in only when the process needs them (on demand). Initially, only those pages are loaded which will be required by the process immediately.

The pages that are not moved into memory are marked as invalid in the page table. For an invalid entry, the rest of the table entry is empty. Pages that are loaded in memory are marked as valid, along with the information about where to find the swapped-out page.
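A rough sketch of this valid/invalid marking, assuming each page-table entry carries a valid bit, a frame number and the page's location on disk (the field names and the frame choice are illustrative):

/* Illustrative demand-paging check on a tiny page table with valid bits. */
#include <stdbool.h>
#include <stdio.h>

struct pte {
    bool valid;                 /* true: page is resident in a frame; false: not loaded yet */
    unsigned int frame;         /* frame number, meaningful only when valid */
    unsigned int disk_block;    /* where to find the page on the backing store */
};

/* Return the frame holding `page`, loading it on demand when the entry is invalid. */
static unsigned int access_page(struct pte table[], unsigned int page) {
    if (!table[page].valid) {
        /* Page fault: a real OS would read disk_block into a free frame here. */
        printf("page fault on page %u (disk block %u)\n", page, table[page].disk_block);
        table[page].frame = page;             /* placeholder frame choice for the sketch */
        table[page].valid = true;
    }
    return table[page].frame;
}

int main(void) {
    struct pte table[4] = {
        {true, 3, 10}, {false, 0, 11}, {false, 0, 12}, {true, 7, 13},
    };
    access_page(table, 1);                    /* first touch: faults and loads the page */
    access_page(table, 1);                    /* second touch: already valid, no fault */
    return 0;
}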

Page Replacement

As studied in demand paging, only certain pages of a process are loaded into memory initially. This allows us to get more processes into memory at the same time. But what happens when a process requests more pages and no free memory is available to bring them in? The following steps can be taken to deal with this problem:

1. Put the process in the wait queue, until any other process finishes its
execution thereby freeing frames.
2. Or, remove some other process completely from the memory to free frames.
3. Or, find some pages that are not being used right now and move them to the disk to get free frames. This technique is called page replacement and is most commonly used. We have some good algorithms to carry out page replacement efficiently.

Thrashing

A process that is spending more time paging than executing is said to be thrashing. In other words, the process does not have enough frames to hold all the pages it needs for execution, so it swaps pages in and out very frequently to keep executing. Sometimes, even pages which will be required in the near future have to be swapped out.

Initially, when CPU utilization is low, the process scheduling mechanism loads multiple processes into memory at the same time to increase the level of multiprogramming, allocating a limited number of frames to each process. As memory fills up, each process starts to spend a lot of time waiting for its required pages to be swapped in, again leading to low CPU utilization because most of the processes are waiting for pages. Hence the scheduler loads even more processes to increase CPU utilization, and as this continues, at some point the complete system comes to a halt.

Advantages of Virtual Memory

 Virtual memory allows you to run more applications at a time.
 With the help of virtual memory, you can easily fit many large programs into a smaller physical memory.
 With the help of virtual memory, a multiprogramming environment can be easily implemented.
 More processes can be maintained in main memory, which leads to more effective utilization of the CPU.
 Data is read from disk only when it is required.
 Common data can be shared easily between processes in memory.
 With the help of virtual memory, speed is gained when only a particular segment of the program is required for its execution.
 A process may even become larger than all of the physical memory.

Disadvantages of Virtual Memory

Given below are the drawbacks of using Virtual Memory:

 Virtual memory reduces the stability of the system.
 The performance of virtual memory is not as good as that of RAM.
 If a system is using virtual memory, applications may run slower.
 Virtual memory negatively affects the overall performance of a system.
 Virtual memory occupies storage space that might otherwise be used for long-term data storage.
 It takes more time to switch between applications.

Operating System - Virtual Memory


A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a
hard disk that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than
physical memory. Virtual memory serves two purposes. First, it allows us to extend
the use of physical memory by using disk. Second, it allows us to have memory
protection, because each virtual address is translated to a physical address.
Following are the situations when the entire program is not required to be loaded fully in main memory:
 User-written error handling routines are used only when an error occurs in the data or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
 The ability to execute a program that is only partially in memory would confer many benefits:
 Fewer I/O operations would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is available.
 Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system. Demand segmentation can also be used to provide virtual memory.

Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into main memory. Instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced.

While executing a program, if the program references a page which is not available in main memory because it was swapped out a little earlier, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to demand the page back into memory.
Advantages
Following are the advantages of Demand Paging −

 Large virtual memory.
 More efficient use of memory.
 There is no limit on the degree of multiprogramming.
Disadvantages
 The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.

Page Replacement Algorithm


Page replacement algorithms are the techniques by which an operating system decides which memory pages to swap out (write to disk) when a page of memory needs to be allocated. Page replacement happens whenever a page fault occurs and a free page cannot be used for the allocation, either because no pages are available or because the number of free pages is lower than the required number of pages.
When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This process determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about page accesses provided by the hardware and tries to select which pages should be replaced so as to minimize the total number of page misses, while balancing this against the cost in primary storage and processor time of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, about which we note two things:
 For a given page size, we need to consider only the page number, not the entire address.
 If we have a reference to a page p, then any immediately following references to page p will never cause a page fault. Page p will be in memory after the first reference, so the immediately following references will not fault.
 For example, consider the following sequence of addresses −
123, 215, 600, 1234, 76, 96
 If the page size is 100, then the reference string is 1, 2, 6, 12, 0, 0 (see the sketch below).
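The derivation of this reference string, and a simple page-fault count for it, can be sketched as follows. The FIFO replacement policy and the choice of 3 frames are assumptions made only for illustration:

/* Derive the reference string (page size 100) and count page faults with FIFO replacement. */
#include <stdio.h>

int main(void) {
    int addresses[] = {123, 215, 600, 1234, 76, 96};
    int n = 6, page_size = 100, frames = 3;

    int resident[3] = {-1, -1, -1};           /* pages currently held in memory */
    int next = 0, faults = 0;                 /* FIFO victim pointer and fault count */

    for (int i = 0; i < n; i++) {
        int page = addresses[i] / page_size;  /* 123 -> 1, 215 -> 2, 600 -> 6, 1234 -> 12, ... */
        int hit = 0;
        for (int j = 0; j < frames; j++)
            if (resident[j] == page) hit = 1;
        if (!hit) {
            resident[next] = page;            /* replace the oldest resident page */
            next = (next + 1) % frames;
            faults++;
        }
        printf("reference %d -> page %d %s\n", addresses[i], page, hit ? "(hit)" : "(fault)");
    }
    printf("total page faults: %d\n", faults); /* 5 faults for this string with 3 frames */
    return 0;
}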
