
5. Memory Management
Memory is an important resource of the computer system that must be managed by the operating system. To execute a program, the user needs to keep it in main memory. Main memory is volatile; therefore, a user stores the program in some secondary storage, which is non-volatile.
Every process needs main memory, since a process's code, stack, heap (dynamically allocated structures), and data (variables) must all reside in memory. Management of main memory is required to support multiprogramming, where many executable processes exist in main memory at any given time.

Multiprogramming
Multiprogramming is required to support multiple processes simultaneously. Since multiple processes are resident in memory at the same time, processor utilization increases: even when some processes are I/O bound, the CPU does not remain idle.
CPU utilization increases because the CPU executes multiple processes on a time-sharing basis. Multiprogramming gives the illusion of running multiple processes at once and provides users with interactive response.


Explain fixed size memory partitioning. //4marks 2022


Multiprogramming with Fixed and Variable Partitions -
In this scheme memory is divided into a number of fixed-size partitions. Each partition may contain exactly one process. The number of partitions and their sizes are defined by the system administrator at system startup. The degree of multiprogramming is higher if more partitions are created.
Fixed partitioning is also known as contiguous memory allocation. It is the simplest method used to load more than one process into main memory.
In fixed partitioning, we divide main memory into partitions whose sizes can be different or equal. The operating system occupies the first partition, and the remaining partitions are used to store user processes. Memory is allocated to the processes in a contiguous way.
In fixed partitioning:
There is no overlapping of partitions.
For execution, a process must be present contiguously within a single partition.

Advantages of Fixed Partitioning –
1. Easy to implement
2. Little OS overhead

Disadvantages of Fixed Partitioning –
1. Internal fragmentation
2. External fragmentation
3. Limit on process size
Example: a 700 MB partition holding a 500 MB process P1 leaves 200 MB unused, and a 600 MB partition holding a 550 MB process P2 leaves 50 MB unused.
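As an illustration (a minimal Python sketch, not part of the original notes; the partition and process sizes are made up), fixed partitioning can be modelled as a table of partitions that each hold at most one process, with the unused space inside a partition being the internal fragmentation:

# Fixed partitioning: each partition holds at most one process.
# Internal fragmentation = partition size - process size (space wasted inside the partition).

partitions = [700, 600, 500, 300]      # fixed partition sizes in MB (illustrative)
occupied = [None] * len(partitions)    # which process (if any) sits in each partition

def load(name, size):
    """Place a process into the first free partition large enough for it."""
    for i, cap in enumerate(partitions):
        if occupied[i] is None and cap >= size:
            occupied[i] = (name, size)
            print(f"{name} ({size} MB) -> partition {i} ({cap} MB), "
                  f"internal fragmentation = {cap - size} MB")
            return True
    print(f"{name} ({size} MB) cannot be loaded (no free partition is big enough)")
    return False

load("P1", 500)   # 700 MB partition -> 200 MB wasted
load("P2", 550)   # 600 MB partition -> 50 MB wasted
load("P3", 800)   # larger than every partition -> shows the limit on process size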

d) Define fragmentation. Explain Internal and External Fragmentation

Fragmentation:
As processes are loaded into and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to memory blocks because the blocks are too small, and those blocks remain unused. This problem is known as fragmentation.
1. Internal Fragmentation
In fixed-size partitioning of memory, if the partition size is larger than the program size, some space within that partition remains unused. This is called internal fragmentation.
2. External Fragmentation
In both fixed and variable-size partitioning, free memory may remain unused: a partition can stay empty even though processes are waiting in other partitions' queues, or the free holes are too small and scattered to hold a waiting process. This is called external fragmentation.
(Worked example used below, Fig. 5.1.4: 64 MB main memory; 8 MB for the OS; 56 MB for user processes P1 = 22 MB, P2 = 20 MB, P3 = 10 MB, leaving a 4 MB hole.)
Dynamic Partitioning:
Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this technique, the partition size is not declared initially; it is decided at the time of process loading.
The first partition is reserved for the operating system. The remaining space is divided into parts, and the size of each partition is equal to the size of the process it holds.
Because the partition size varies according to the needs of the process, internal fragmentation is avoided.

Advantages of Dynamic Partitioning over Fixed Partitioning
1. No internal fragmentation
2. No limitation on the size of the process
3. The degree of multiprogramming is dynamic

Disadvantages:
External fragmentation

Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
When a process arrives and needs memory, the system searches for a hole large enough to fit the process (see the sketch below). This technique makes efficient use of main memory and causes no internal fragmentation. However, if many small holes are scattered, they cannot be allocated to one large process.
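A minimal Python sketch of this hole search (assuming a simple first-fit strategy and made-up hole sizes; not from the original notes) shows how scattered small holes cause an allocation to fail even though enough total memory is free:

# Dynamic partitioning: free holes kept as (start, size) pairs in MB; first-fit search.
holes = [(8, 6), (20, 4), (30, 10)]    # scattered free holes, 20 MB free in total

def first_fit(size):
    """Allocate 'size' MB from the first hole that can hold it, or fail."""
    for i, (start, hole_size) in enumerate(holes):
        if hole_size >= size:
            # carve the request off the front of the hole
            if hole_size == size:
                holes.pop(i)
            else:
                holes[i] = (start + size, hole_size - size)
            return start
    return None    # fails even if total free space >= size: external fragmentation

print(first_fit(9))    # fits in the 10 MB hole at address 30
print(first_fit(15))   # fails: 11 MB is still free, but no single hole is 15 MB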

Consider the following example :



Initially 64 MB of main memory is available; 8 MB is allocated to the operating system, as shown in Fig. 5.1.4(a).
For user processes, 56 MB of memory is available and is considered one large single hole. Processes P1, P2 and P3 are loaded and given sufficient memory, leaving a hole of 4 MB at the end of memory (Figs. 5.1.4(b), (c) and (d)).
At some point in time none of the loaded processes is ready. The operating system swaps out P2, and in its place a new process P4 of size 18 MB is swapped in (Fig. 5.1.4(e)); a hole of 2 MB is created.
Again after some time none of the processes in memory is ready, but the suspended process P2 is available, so the operating system swaps P1 out and swaps P2 back in, as shown in Fig. 5.1.4(f).
The example shows that many small scattered holes remain. These holes cannot be allocated to a process larger than any of them. This is called external fragmentation.

The solution to this problem is compaction.
Compaction is a technique that converts many small scattered holes into one large contiguous hole. It relocates the various processes so that all the free space is accumulated in one place. Compaction makes inefficient use of the processor, because CPU time is spent moving processes. Fig. 5.1.5 shows the memory after compaction.
Fig. 5.1.5(a) shows the scattered small holes. If processes P4 and P3 are moved up, we get one large hole at the end of memory, as shown in Fig. 5.1.5(b).
If processes P2, P4 and P3 are moved downward, we get one large 8 MB hole at the start of the user memory area, as shown in Fig. 5.1.5(c). However, in this case 48 MB of processes must be moved, compared to 28 MB for Fig. 5.1.5(b). Fig. 5.1.5(d) shows the 28 MB shifting of processes.
Compaction should move the set of processes with minimum total size. Compaction increases the degree of multiprogramming, because processes can be loaded into the memory that becomes available by combining many scattered holes. If relocation is done at compile time (static), compaction is not possible; it is possible only if relocation is dynamic and done at execution time.

List free space management techniques. Describe any one in detail.


Free space management techniques
The OS must manage memory when it is allocated dynamically. The following techniques are used to keep track of memory usage:
1. Bitmap or bit vector (1/0)
2. Linked list of free/allocated memory segments
3. Grouping
4. Counting
The following two approaches are described here:

1. Bit Map
Memory is divided into allocation units, and each unit is represented by one bit in the bit map: one value marks a free unit and the other marks an allocated unit.
The main advantage of this approach is its relative simplicity and its efficiency in finding the first free block or n consecutive free blocks.

Disadvantage:
Searching a bitmap for a run of a given length is a slow operation.
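A minimal Python sketch (the bit values are illustrative, and 0 is taken to mean "free") of how a bit map is scanned for n consecutive free allocation units:

# Bit map free-space management: one bit per allocation unit (0 = free, 1 = allocated).
bitmap = [1, 1, 0, 0, 1, 0, 0, 0, 1, 0]

def find_free_run(n):
    """Return the index of the first run of n consecutive free units, or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == n:
                return run_start
        else:
            run_len = 0
    return -1

start = find_free_run(3)          # -> 5 (units 5, 6, 7 are free)
if start != -1:
    for i in range(start, start + 3):
        bitmap[i] = 1             # mark the allocated units
print(start, bitmap)

The linear scan in find_free_run is exactly the slow search mentioned above.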

2. Linked List for Dynamic Partitioning
A better and more popular approach to keeping track of free and allocated partitions is a linked list.
In this approach, the operating system maintains a linked list in which each node represents one partition. Every node has three fields:
|flag|starting index|end index|
1. The first field stores a flag bit which shows whether the partition is a hole (free) or holds a process.
2. The second field stores the starting index of the partition.
3. The third field stores the end index of the partition.
If a partition is freed at some point in time, it is merged with its adjacent free partition(s) without any extra effort.
There are some points which need attention while using this approach:
1. The OS must be very clear about the location of the new node which is to be added in the linked list, so that the list stays sorted by address.
2. Using a doubly linked list has a positive effect on performance, because a node in a doubly linked list also keeps track of its previous node.

Advantages:
The segment list is kept sorted by address. Sorting this way has the advantage that when a process terminates or is swapped out, updating the list is straightforward.
Disadvantages:
This method is less efficient, because the linked list must be maintained and searched.
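A minimal Python sketch (node layout follows the flag / starting index / end index description above; the list contents are made up) of a partition list kept sorted by address, where freeing a partition merges it with adjacent holes:

# Each node: (flag, start, end) where flag 'H' = hole (free) and 'P' = process (allocated).
# The list is kept sorted by starting address.
segments = [('P', 0, 9), ('P', 10, 19), ('P', 20, 29), ('P', 30, 39), ('H', 40, 49)]

def free(start):
    """Free the partition starting at 'start' and merge it with adjacent holes."""
    for i, (flag, s, e) in enumerate(segments):
        if s == start and flag == 'P':
            segments[i] = ('H', s, e)
            # merge with the next node if it is also a hole
            if i + 1 < len(segments) and segments[i + 1][0] == 'H':
                segments[i] = ('H', s, segments[i + 1][2])
                segments.pop(i + 1)
            # merge with the previous node if it is also a hole
            if i > 0 and segments[i - 1][0] == 'H':
                segments[i - 1] = ('H', segments[i - 1][1], segments[i][2])
                segments.pop(i)
            return

free(20)          # 20-29 becomes a hole; both neighbours are allocated, so no merge
free(30)          # holes 20-29, 30-39 and 40-49 now coalesce into one hole 20-49
print(segments)   # [('P', 0, 9), ('P', 10, 19), ('H', 20, 49)]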

Virtual Memory
There are many cases where the entire program is not needed in main memory at one time. Even when the entire program is needed, it may not all be needed at the same time.
Because of the idea of virtual memory, application programs always get the feeling of having a contiguous working address space. In reality, this working memory can be physically fragmented and may even overflow onto disk storage.

Paging:
In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages.
The main idea behind paging is to divide each process into pages. Main memory is likewise divided into frames.
One page of the process is stored in one frame of memory. The pages can be stored at different locations of memory, but the priority is always to find contiguous frames or holes.
Pages of the process are brought into main memory only when they are required; otherwise they reside in secondary storage.
Different operating systems define different frame sizes, but within a system every frame has the same size. Because pages are mapped onto frames, the page size must be the same as the frame size.
Initially all pages remain in secondary storage (the backing store). When a process needs to be executed, its pages are loaded into any available memory frames from secondary storage.

Paging hardware:

The following basic operations are done in paging:

The CPU generates a logical address, which is divided into two parts: a page number (p) and a page offset (d).
The page number is used as an index into a page table.
The page table contains the base address of each page in physical memory.
This base address is combined with the page offset to form the physical memory address that is sent to the memory unit.
The physical location of the data in memory is therefore at offset d in page frame f.

Because of paging, the user program sees memory as one single contiguous space and gets the illusion that memory contains only this one program. In reality, the user program is spread throughout main (physical) memory, and its logical addresses are translated into physical addresses.
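A minimal Python sketch of this translation (the page size, page table contents and address below are illustrative, not from the notes):

PAGE_SIZE = 1024                  # bytes per page (and per frame)
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (illustrative)

def translate(logical_address):
    """Translate a logical address into a physical address using the page table."""
    p = logical_address // PAGE_SIZE    # page number: index into the page table
    d = logical_address % PAGE_SIZE     # page offset: unchanged by translation
    f = page_table[p]                   # frame holding page p
    return f * PAGE_SIZE + d            # physical address = frame base + offset

print(translate(2100))   # page 2, offset 52 -> frame 7 -> 7*1024 + 52 = 7220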

Segmentation
In operating systems, segmentation is a memory management technique in which memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process.
The details about each segment are stored in a table called the segment table. The segment table is stored in one (or more) of the segments.
The segment table contains mainly two pieces of information about each segment:
1. Base: the base address of the segment.
2. Limit: the length of the segment.
Translation of a logical address into a physical address by the segment table:
The CPU generates a logical address which contains two parts:
1. Segment number
2. Offset
The segment number is used as an index into the segment table, and the offset is compared with the limit of that segment.
If the offset is less than the limit, the address is valid; otherwise an error is raised because the address is invalid.
For a valid address, the base address of the segment is added to the offset to get the physical address of the actual word in main memory.
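A minimal Python sketch of this check-and-add translation (the segment table values are illustrative):

# Segment table: segment number -> (base, limit); values are made up.
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    """Check the offset against the segment limit, then add the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise ValueError("addressing error: offset exceeds segment limit")
    return base + offset

print(translate(2, 53))        # 4300 + 53 = 4353
try:
    print(translate(1, 500))   # offset 500 >= limit 400 -> invalid address
except ValueError as err:
    print(err)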

Advantages of Segmentation
1. No internal fragmentation.
2. The average segment size is larger than the typical page size.
3. It is easier to relocate segments than an entire address space.
4. The segment table is smaller than the page table used in paging.

Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-size partitions.
3. It requires costly memory management algorithms.

Paging vs Segmentation

Paging | Segmentation
It divides physical memory into frames and the program's address space into same-size pages. | It divides the computer's physical memory and the program's address space into segments.
A page is always of fixed block size. | A segment is of variable size.
The size of a page is specified by the hardware. | The size of a segment is specified by the user.
Paging is faster than segmentation. | Segmentation is slower than paging.
A page table is used to map pages to frames in memory. | A segment table is used to map segments to physical memory.
Invisible to the programmer. | Visible to the programmer.
It suffers from internal fragmentation. | It suffers from external fragmentation.
Demand Paging - definition
According to the concept of virtual memory, in order to execute a process only a part of it needs to be present in main memory, which means only a few of its pages will be in main memory at any time.
However, deciding which pages need to be kept in main memory and which can stay in secondary memory is difficult, because we cannot say in advance which page a process will require at a particular time.
To overcome this problem, there is a concept called demand paging.

It suggests keeping all pages in secondary memory until they are required; in other words, do not load any page into main memory until it is needed.
When a page is referred to for the first time, it is found in secondary memory and brought into main memory.
After that, it may or may not be present in main memory, depending on the page replacement algorithm.

What is a Page Fault?

If the referred page is not present in main memory, there is a miss; this is called a page miss or page fault.
The CPU then has to bring the missing page in from secondary memory. If the number of page faults is very high, the effective access time of the system becomes very high.
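A standard textbook formula (not stated in the notes above) makes this precise:

effective access time = (1 - p) x memory access time + p x page-fault service time

where p is the page-fault rate. For example, with a 200 ns memory access time, an 8 ms page-fault service time and p = 0.001, the effective access time is about 0.999 x 200 ns + 0.001 x 8,000,000 ns ≈ 8,200 ns, i.e. roughly 40 times slower than a fault-free access.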

Page Replacement Algorithms

The page replacement algorithm decides which memory page is to be replaced. The process of replacement is sometimes called swapping out or writing to disk. Page replacement is done when the requested page is not found in main memory (page fault).

1. FIFO
2. LRU
3. Optimal

FIFO Algorithm
This is the simplest page replacement algorithm and works on a first-in, first-out (FIFO) basis: it throws out pages in the order in which they were brought in.
A time is associated with each page when it is brought into main memory, and the algorithm always chooses the oldest page for replacement.
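A minimal Python sketch of FIFO replacement (three frames assumed; the reference string is the one used in the worked example below):

from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement (replace the oldest loaded page)."""
    frames = deque()                      # front = page that was loaded first
    faults = 0
    for page in reference_string:
        if page not in frames:            # page fault: page is not resident
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # evict the page that was loaded first
            frames.append(page)
        # on a hit FIFO does nothing: load order is unaffected by later references
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # -> 15 page faults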

Explain the LRU page replacement algorithm for the following reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1. Calculate the page faults.
LRU:
The Least Recently Used (LRU) page replacement policy replaces the page that has not been used for the longest period of time.
LRU replacement associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.
The LRU policy is often used as a page-replacement algorithm and is considered to be good.
An LRU page-replacement algorithm may require substantial hardware assistance.

Counters:
In the simplest case, we associate with each page-table entry a time-of-use field and add to the
CPU a logical clock or counter.
The clock is incremented for every memory reference.
Whenever a reference to a page is made, the contents of the clock register are copied to the time-
of-use field in the page table entry for that page.
In this way, we always have the "time" of the last reference to each page. We replace the page
with the smallest time value.

Stack:
Another approach to implementing LRU replacement is to keep a stack of page numbers. Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most recently used page is always at the top of the stack and the least recently used page is always at the bottom.
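A minimal Python sketch of the stack idea (using an OrderedDict so that the most recently used page is at one end and the least recently used page at the other; three frames and the reference string from the example below are assumed):

from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    """Count page faults under LRU replacement (replace the least recently used page)."""
    stack = OrderedDict()                  # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in stack:
            stack.move_to_end(page)        # referenced again: move to the "top" of the stack
        else:
            faults += 1                    # page fault
            if len(stack) == num_frames:
                stack.popitem(last=False)  # evict the page at the "bottom" (least recent)
            stack[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # -> 12 page faults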
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
(The number of frames is not mentioned in the question, so assume 3 or 4 frames; the tables below use 3 frames.)
FIFO: replace the page that was brought into memory first.

Faulting reference:  7  0  1  2  3  0  4  2  3  0  1  2  7  0  1
Frame 1:             7  7  7  2  2  2  4  4  4  0  0  0  7  7  7
Frame 2:                0  0  0  3  3  3  2  2  2  1  1  1  0  0
Frame 3:                   1  1  1  0  0  0  3  3  3  2  2  2  1

(The references 0, 3, 2, 0 and 1 at positions 5, 12, 13, 16 and 17 of the string are hits.)

Page faults = 15

LRU: replace the least recently used page, i.e. look backwards (to the left) in the reference string and evict the resident page whose last use is farthest in the past.

Faulting reference:  7  0  1  2  3  4  2  3  0  1  0  7
Frame 1:             7  7  7  2  2  4  4  4  0  1  1  1
Frame 2:                0  0  0  0  0  0  3  3  3  0  0
Frame 3:                   1  1  3  3  2  2  2  2  2  7

(The references 0, 0, 3, 2, 2, 1, 0 and 1 at positions 5, 7, 12, 13, 15, 17, 19 and 20 of the string are hits.)

Page faults = 12

Optimal: replace the page that will not be used for the longest time in the future, i.e. look forward (to the right) in the reference string and evict the resident page whose next use is farthest away, or that is never used again.

Faulting reference:  7  0  1  2  3  4  0  1  7
Frame 1:             7  7  7  2  2  2  2  2  7
Frame 2:                0  0  0  0  4  0  0  0
Frame 3:                   1  1  3  3  3  1  1

(All other references are hits.)

Page faults = 9
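A minimal Python sketch of the optimal policy (same reference string, three frames assumed): on a fault, evict the resident page whose next use lies farthest in the future, or that is never used again.

def optimal_faults(reference_string, num_frames):
    """Count page faults under optimal (Belady) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                       # hit: nothing to do
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # victim: the resident page not needed for the longest time in the future
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float('inf')
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))   # -> 9 page faults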

Find out the total number of page faults using:
i) Least Recently Used (LRU) page replacement
ii) Optimal page replacement
if the pages arrive in the order 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1.

For the page reference string
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
calculate the page faults applying
i) Optimal
ii) LRU
iii) FIFO
page replacement algorithms for a memory with three frames.

Given a page reference string with three (03) page frames, calculate the page faults with the Optimal and LRU page replacement algorithms respectively:
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 (the frames can be represented in any order).
FIFO – replace the page that was brought in first.
LRU – replace the page least recently used (looking at the past).
Optimal – replace the page not needed for the longest time in the future.
