
Operating System(22516)

Unit-5 Memory Management

Marks: 14

Prepared by:
Mrs. Kousar Ayub A.
Lecturer(Selection Grade)
Computer Engg. Dept
M. H. Saboo Siddik Polytechnic

1
Learning Outcomes

The learners will be able to understand:


1. Basic memory management.
2. Virtual memory.
3. Page replacement algorithms.

2
Introduction
▪ Memory management is one of the important functions of the OS.
▪ It allocates main memory space to processes and their data at the time of their execution.
▪ It moves processes back and forth between main memory and disk during execution.
▪ Memory management also performs activities such as upgrading the performance of the computer system, enabling the execution of multiple processes at the same time, sharing the same memory space among different processes, and so on.
3
Functions of Memory Management
• Process Isolation: Process isolation means controlling how one process interacts with the data and memory of another process.

• Tracking of Memory Locations: Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to each process.

• Automatic Allocation and Management: Memory should be allocated dynamically based on the priorities of the processes. Otherwise process waiting increases, which decreases CPU utilization and memory utilization.

• Long Term Storage: Long term storage of processes will reduce the …
4
Functions of Memory Management
• Support of Modular Programming: A program is divided into a number of modules; if memory is not sufficient for the entire program, we can load at least some of the modules instead of the entire program. This increases CPU utilization and memory utilization.
• Protection and Access Control: Do not apply the protection mechanisms and access control to all the processes; it is better to apply them only to the important applications. This saves execution time.
• Keeping Status of Main Memory Locations: Memory management keeps track of the status of each memory location, whether it is allocated or free.
5
Memory Management Scheme

• Two major memory management schemes are possible. Each approach divides memory into a number of regions or partitions.

1. Static (Fixed Sized) Memory Partitioning, or Multiprogramming with a Fixed number of Tasks (MFT)

2. Dynamic (Variable) Memory Partitioning, or Multiprogramming with a Variable number of Tasks (MVT)

6
Static (Fixed Sized) Memory Partitioning
• In static memory partitioning, the memory is divided into a number of fixed sized partitions that do not change as the system runs.
• Each partition in static memory partitioning contains exactly one process, so the number of programs that can be executed (i.e. the degree of multiprogramming) depends on the number of partitions.
• There are two alternatives for fixed sized memory partitioning, namely equal sized partitions and unequal sized partitions.
7
Static (Fixed Sized) Memory Partitioning

8
Job Scheduling in fixed sized memory partitions

• As jobs enter the system, they are put into a job queue.
• The job scheduler takes into account the memory requirement of each job and the available regions in determining which jobs are allocated memory.
• When a job is allocated space, it is loaded into a region. It can then compete for the CPU. When a job terminates, it releases its memory region, which the job scheduler may then fill with another job from the job queue.
9
MFT with separate queue for each region
• If we have three user memory regions of sizes 2K, 6K, and 12K we need three queues, namely Q2, Q6 and Q12.

• An incoming job requiring 4K of memory would be appended to Q6, a new job needing 8K would be put in Q12, and a job of 2K would go in Q2.

• Each queue is scheduled separately. Since each queue has its own memory region, there is no competition between queues for memory.

10
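This routing rule is easy to sketch in Python (an illustration only, not from the slides; the function name and queue contents are ours, and the region sizes follow the 2K/6K/12K example above):

```python
from collections import deque

# One fixed region of each size (2K, 6K, 12K), each with its own job queue.
queues = {2: deque(), 6: deque(), 12: deque()}   # Q2, Q6, Q12

def enqueue_job(job_name, size_k):
    """Append the job to the queue of the smallest region that can hold it."""
    for region_size in sorted(queues):
        if size_k <= region_size:
            queues[region_size].append(job_name)
            return region_size
    raise ValueError(f"{job_name}: no region is large enough for {size_k}K")

# The examples from the slide: 4K goes to Q6, 8K to Q12, 2K to Q2.
print(enqueue_job("A", 4), enqueue_job("B", 8), enqueue_job("C", 2))   # 6 12 2
```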
MFT Advantages and disadvantages
Advantages:
• Simple to implement.
• It requires minimal operating system software and processing overhead.
• Fixed partitioning makes efficient utilization of the processor and I/O devices.

Disadvantages:
• The main problem with the fixed partitioning method is how to determine the number of partitions and how to determine their sizes.
• Memory wastage.
• Inefficient use of memory due to internal fragmentation.
• The maximum number of active processes is fixed.
11
Dynamic (Variable) Memory Partitioning

• In variable memory partitioning the partitions can vary in number and size, and the amount of memory allocated is exactly the amount of memory a process requires.

• The operating system keeps a table indicating which parts of memory are available and which are occupied. Initially all memory is available for user programs and is considered as one large block of available memory, a hole.

• When a job arrives and needs memory, we search for a hole large enough for this job. If we find one, we allocate only as much as is needed, keeping the rest available to satisfy future requests.
12
Dynamic (Variable) Memory Partitioning
For example, assume 256K of memory is available and a resident monitor of 40K. This situation leaves 216K for user programs.

13
Example memory allocation and job scheduling for
MVT

14
Internal Fragmentation
Internal fragmentation occurs when the memory allocator leaves extra space empty inside a block of memory that has been allocated to a client.
For example, blocks may be required to be evenly divisible by four, eight or 16 bytes. When this occurs, a client that needs 57 bytes of memory, for example, may be allocated a block that contains 60 bytes, or even 64. The extra bytes that the client doesn't need go to waste.

15
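The wasted space is just the request rounded up to the allocation granularity; a minimal Python sketch using the 57-byte example above (the function name is ours):

```python
def internal_fragmentation(request_bytes, block_size):
    """Round the request up to a multiple of block_size; return (allocated, wasted)."""
    blocks_needed = -(-request_bytes // block_size)      # ceiling division
    allocated = blocks_needed * block_size
    return allocated, allocated - request_bytes

# A 57-byte request on 4-, 8- and 16-byte granularities.
for size in (4, 8, 16):
    allocated, wasted = internal_fragmentation(57, size)
    print(f"block size {size:2d}: allocated {allocated} bytes, {wasted} wasted")
# block size  4: allocated 60 bytes, 3 wasted
# block size  8: allocated 64 bytes, 7 wasted
# block size 16: allocated 64 bytes, 7 wasted
```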
Internal Fragmentation

16
External Fragmentation
External fragmentation exists when enough total memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes.
For example, suppose holes of 20K and 10K are available in a multiple partition allocation scheme, and the next process requests 30K of memory. Although 30K of memory is free, which would satisfy the request, the free space is not contiguous. So there is an external fragmentation of memory.
17
External Fragmentation

18
Dynamic Storage Allocation
• Disk space can be viewed as a large array of disk blocks. At any given time some of these blocks are allocated to files and others are free.

• Disk space is seen as a collection of free and used segments, where each segment is a contiguous set of disk blocks. An unallocated segment is called a hole. The dynamic storage allocation problem is how to satisfy a request of size 'n' from a list of free holes. There are many solutions to this problem.

• The set of holes is searched to determine which hole is best to allocate. The most common strategies used to select a free hole from the set of available holes are first fit, best fit and worst fit.
19
Dynamic Storage Allocation
1. First Fit: Allocate the first hole (or free block) that is big enough for the new process. Searching can start either at the beginning of the set of holes or where the previous first fit search ended. We can stop searching as soon as we find a large enough free hole.
2. Best Fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is kept ordered by size. This strategy produces the smallest left over hole.
3. Worst Fit: Allocate the largest hole. Again we must search the entire list, unless it is sorted by size.

First fit and best fit are better than worst fit in both time and storage utilization; first fit is generally faster.
20
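The three strategies can be sketched in Python as functions that each return the index of the chosen hole (a minimal illustration; the function and variable names are ours, not part of any standard API):

```python
def first_fit(holes, request):
    """Index of the first hole that is large enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest hole that still fits, or None."""
    candidates = [i for i, hole in enumerate(holes) if hole >= request]
    return min(candidates, key=lambda i: holes[i]) if candidates else None

def worst_fit(holes, request):
    """Index of the largest hole that fits, or None."""
    candidates = [i for i, hole in enumerate(holes) if hole >= request]
    return max(candidates, key=lambda i: holes[i]) if candidates else None

def allocate(strategy, holes, requests):
    """Serve the requests in order; the chosen hole shrinks by the amount allocated."""
    holes = list(holes)
    chosen = []
    for request in requests:
        i = strategy(holes, request)
        chosen.append(holes[i])        # size of the hole that was picked
        holes[i] -= request            # the leftover stays behind as a smaller hole
    return chosen
```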
Dynamic Storage Allocation
Consider a swapping system in which memory consists of the following hole sizes in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB and 15KB. Which hole is taken for successive segment requests of (i) 12 KB (ii) 10 KB (iii) 9 KB under first fit, best fit and worst fit?
Sol: Memory arrangements for the 12 KB job are as follows:

21
Dynamic Storage Allocation
Sol: Memory arrangements for 10 KB job are as follows:

22
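As a check, running the sketch above on the hole list from this question gives the arrangements that follow directly from the three definitions:

```python
holes = [10, 4, 20, 18, 7, 9, 12, 15]        # KB, in memory order
requests = [12, 10, 9]                       # KB, successive requests

print(allocate(first_fit, holes, requests))  # [20, 10, 18]
print(allocate(best_fit,  holes, requests))  # [12, 10, 9]
print(allocate(worst_fit, holes, requests))  # [20, 18, 15]
```

So first fit places the 12 KB, 10 KB and 9 KB requests in the 20 KB, 10 KB and 18 KB holes; best fit uses the 12 KB, 10 KB and 9 KB holes; worst fit uses the 20 KB, 18 KB and 15 KB holes.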
Free Space Management Techniques
• Files are created and deleted frequently during the operation of a computer system. Since there is only a limited amount of disk space, it is necessary to reuse the space from deleted files for new files.
• To keep track of free disk space, the file system maintains a free space list. The free space list records all disk blocks which are free.
• To create a file, we search the free space list for the required amount of space and allocate it to the new file. This space is then removed from the free space list. When a file is deleted, its disk space is added to the free space list.
• The process of looking after and managing the free blocks of the disk is called free space management. The methods used as free space management techniques are Bit Vector, Linked List, Grouping and Counting.

23
Bit Vector
• The free space list need not be implemented as an actual list; it can be implemented as a Bit Map or Bit Vector. A bit map is a series or collection of bits where each bit corresponds to a disk block.
• Each block in the bit map is represented by one bit. If the block is free, the bit is '0'; if the block is allocated, the bit is '1'.
• For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26 and 27 are free. The free space bit map would be:
11000011000000111001111110001111………

24
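A short Python sketch that builds this bit map from the list of free blocks, following the slide's convention of 0 = free and 1 = allocated (the names are ours):

```python
def build_bitmap(total_blocks, free_blocks):
    """Return the bit map as a string: '0' for a free block, '1' for an allocated one."""
    free = set(free_blocks)
    return "".join("0" if block in free else "1" for block in range(total_blocks))

free_blocks = [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27]
print(build_bitmap(32, free_blocks))
# 11000011000000111001111110001111
```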
Grouping
• Grouping is a free space management technique that modifies the simple free list approach: the first free block stores the addresses of 'n' free blocks. The first n-1 of these are actually free; the last one is the disk address of another block containing the addresses of another 'n' free blocks.
• The importance of this implementation is that the addresses of a large number of free blocks can be found quickly.
• A disk block thus contains the addresses of many free blocks, and a block containing free block pointers becomes free itself once those blocks are used.
25
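One way to picture grouping in code: each index block holds the addresses of n-1 free blocks plus, in its last slot, the address of the next index block. The following Python sketch is purely illustrative (the block numbers and dictionary layout are invented for the example):

```python
# Illustrative in-memory model of grouping, not an on-disk format.
# Block 100 lists three free blocks and points to the next index block, 200.
index_blocks = {
    100: {"free": [101, 102, 103], "next": 200},
    200: {"free": [201, 202, 203], "next": None},   # end of the chain
}

def walk_free_blocks(first_index_block):
    """Yield every free block address reachable from the first index block."""
    block = first_index_block
    while block is not None:
        entry = index_blocks[block]
        yield from entry["free"]
        block = entry["next"]

print(list(walk_free_blocks(100)))   # [101, 102, 103, 201, 202, 203]
```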
Grouping

26
Virtual Memory
• Virtual memory is a technique which allows the execution of
processes that may not be completely in memory.
• Virtual memory is the separation of user logical memory from
physical memory. This separation allows an extremely large
virtual memory to be provided for programmers when only a
smaller physical memory is available.
• The basic idea behind virtual memory is that the combined
size of the program, data and stack may exceed the amount of
physical memory available for it.
• The operating system keeps those parts of the program
currently in use in main memory, and the rest on the disk.

27
Virtual Memory

28
Paging
• Paging is a memory management technique by which a computer stores and retrieves data from secondary storage for use in main memory. In paging, the operating system retrieves data from secondary storage in same-size blocks called pages.
• Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

• The basic idea behind paging is that when a process is swapped in, the pager loads into memory only those pages that it expects the process to need.

• Paging permits a program's memory to be non-contiguous, thus allowing a program to be allocated physical memory wherever it is available.
29
Paging Hardware
• Every address generated by the CPU is divided into two parts, namely a page number (p) and a page offset (d).

• The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical address that is sent to the memory unit.

30
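The split and the table lookup can be sketched in Python (a minimal illustration with our own names; a power-of-two page size is assumed so the split is a simple divide and modulo):

```python
def translate_page(logical_address, page_table, page_size):
    """Split a logical address into (p, d) and map it through the page table."""
    page_number = logical_address // page_size   # p: index into the page table
    offset = logical_address % page_size         # d: carried through unchanged
    frame_number = page_table[page_number]       # base frame for that page
    return frame_number * page_size + offset     # physical address
```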
Paging Model of Logical and Physical Memory

31
Paging Example for 32 word memory with 4 word
pages
For example, using a page size of 4 words and a physical memory of 32 words (8 frames), we show how the user's view of memory can be mapped into physical memory.
Logical address 0 is page 0, offset 0. We find that page 0 is in frame 5, so logical address 0 maps to physical address 20 (= 5 × 4 + 0).
Logical address 4 is page 1, offset 0. Page 1 is in frame 6, so logical address 4 maps to physical address 24 (= 6 × 4 + 0).

32
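Using the sketch above with a 4-word page size reproduces the two mappings worked out on this slide. Note that only the entries for pages 0 and 1 (frames 5 and 6) come from the slide; the entries for pages 2 and 3 below are assumed values added just to complete the table.

```python
page_table = {0: 5, 1: 6, 2: 1, 3: 2}               # pages 2 and 3: assumed entries

print(translate_page(0, page_table, page_size=4))   # 20  (5 * 4 + 0)
print(translate_page(4, page_table, page_size=4))   # 24  (6 * 4 + 0)
```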
Paging

33
Paging
Paging itself is a form of dynamic relocation. Every logical address is mapped by the paging hardware to some physical address. Each user page needs one frame; thus if a job requires n pages, there must be n frames available in memory.
Each page of the job is loaded into one of the allocated frames, and the frame number is put in the page table for this job, and so on. Using a paging scheme we have no external fragmentation: any free frame can be allocated to a job that needs it.
Each job has its own page table. The page table can be implemented as a set of dedicated registers.
34
Segmentation
• Like paging, segmentation is also a memory management scheme, but one that implements the user's view of a program.
• In segmentation, the entire logical address space is considered as a collection of segments, with each segment having a number and a length.
• The length of a segment may range from 0 to some maximum value as specified by the hardware, and may also change during execution. The user specifies each logical address as consisting of a segment number (s) and an offset (d).
• A segment is a logical unit such as a main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays etc.

35
Segmentation Hardware
• A segment is defined as a logical grouping of instructions. A logical address space is a collection of segments. Every program/job is a collection of segments such as subroutines, arrays etc.
• Each segment has a name and a length. Addresses specify both the segment name and the offset within the segment; the user specifies each address by two quantities, a segment name and an offset.
• A logical address consists of two parts, a segment number 's' and an offset into that segment 'd'. The segment number is used as an index into the segment table. Each entry of the segment table has a segment base and a segment limit.
36
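A minimal Python sketch of the lookup (the table contents and names are illustrative, not from the slides): each segment table entry holds a base and a limit, and the offset is checked against the limit before the physical address is formed.

```python
# Illustrative segment table: segment number -> (base, limit).
segment_table = {
    0: (1400, 1000),   # e.g. the main program
    1: (6300, 400),    # e.g. a stack segment
}

def translate_segment(segment_number, offset):
    """Map (s, d) to a physical address, trapping if the offset exceeds the limit."""
    base, limit = segment_table[segment_number]
    if offset >= limit:
        raise MemoryError("segmentation trap: offset outside the segment")
    return base + offset

print(translate_segment(0, 53))   # 1453
print(translate_segment(1, 10))   # 6310
```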
Compaction
Compaction is a method used to overcome the external fragmentation problem. All free blocks are brought together as one large block of free space.
The collection of free space from multiple non-contiguous blocks into one large free block in a system's memory is called compaction.
Compaction is possible only if relocation is dynamic, performed at execution time using base and limit registers. The simplest compaction algorithm is to move all jobs towards one end of memory; all holes move in the other direction, producing one large hole of available memory.
37
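The "move everything to one end" algorithm described above can be sketched in Python (an illustration only; the job names and sizes are invented for the example):

```python
def compact(jobs, memory_size):
    """Slide all jobs to the low end of memory; return the new layout and the
    single large hole left at the high end."""
    layout = []
    next_free = 0
    for name, size in jobs:                        # (name, size) in memory order
        layout.append((name, next_free, size))     # job now starts at next_free
        next_free += size
    hole = (next_free, memory_size - next_free)    # one large hole at the end
    return layout, hole

layout, hole = compact([("J1", 60), ("J2", 100), ("J3", 40)], memory_size=256)
print(layout)   # [('J1', 0, 60), ('J2', 60, 100), ('J3', 160, 40)]
print(hole)     # (200, 56) -> one hole of size 56 starting at address 200
```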
Page table when some pages are not in main
memory

38
Demand Paging
• Demand paging is a method of virtual memory management.
• With demand-paged virtual memory, pages are only loaded when they
are demanded during program execution; pages that are never accessed
are thus never loaded into physical memory.
• A demand-paging system is similar to a paging system with swapping,
where processes reside in secondary memory (usually a disk). When we
want to execute a process, we swap it into memory.
• Rather than swapping the entire process into memory, however, we use a lazy swapper called a pager. A lazy swapper never swaps a page into memory unless that page will be needed.
• When a process is to be swapped in, the pager guesses which pages will
be used before the process is swapped out again. Instead of swapping in
a whole process, the pager brings only those necessary pages into
memory.
39
Page Replacement Algorithms
• When the processor needs to execute a page, and that page is not available in main memory, the situation is called a page fault.

• To bring the required page into main memory, if space is not available, we need to remove a page from main memory to make space for the new page which needs to be executed.

• When a page fault occurs, the operating system has to choose a page to remove from memory to make room for the page that has to be brought in. This is known as page replacement.
40
FIFO (First In First Out) Page Replacement Algorithm
• The simplest page replacement algorithm is a FIFO.
• A FIFO replacement algorithm associates with each page
the time when that page was brought into memory.
• When a page must be replaced, the oldest page is chosen.
• A FIFO queue is created to hold all pages in memory. We replace the page at the head of the queue.
• When a page is brought into memory, we insert it at the tail of the queue.
• The FIFO page replacement algorithm is easy to understand and program, but its performance is not always good.

41
FIFO (First In First Out) Page Replacement Algorithm
Consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.
The three frames are initially empty.

There are 15 faults altogether.
42
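A short Python simulation of FIFO replacement (a sketch with our own names); run on the reference string above with 3 frames, it counts the 15 faults:

```python
from collections import deque

def fifo_faults(reference_string, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()                     # oldest page sits at the left end
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()         # evict the oldest page
            frames.append(page)          # the new page joins at the tail
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))   # 15
```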


Optimal Page Replacement Algorithm
• An optimal page replacement algorithm has the lowest page fault rate of all algorithms and would never suffer from Belady's anomaly.
• The optimal replacement algorithm states: replace the page which will not be used for the longest period of time.
• Consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

43
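A Python sketch of the optimal policy (names are ours): on a fault with all frames full, it evicts the resident page whose next use lies farthest in the future, or that is never used again. On the reference string above with 3 frames this comes to 9 faults.

```python
def optimal_faults(reference_string, frame_count):
    """Count page faults under the optimal (farthest-next-use) policy."""
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue                          # hit: nothing to do
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)               # a frame is still free
            continue
        future = reference_string[i + 1:]
        # Evict the page whose next use is farthest away (never used again = infinity).
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        frames[frames.index(victim)] = page
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(ref, 3))   # 9
```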
LRU (Least Recently Used) Page Replacement
Algorithm
• If we use the recent past as an approximation of the near future, then we would replace the page which has not been used for the longest period of time. This is the least recently used (LRU) algorithm.

• LRU replacement associates with each page the time of its last use. When a page is to be replaced, LRU chooses the page which has not been used for the longest period of time.

• We can think of this strategy as the optimal page-replacement algorithm looking backward in time, rather than forward.
44
LRU (Least Recently Used) Page Replacement
Algorithm
For example, consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

No. of page faults = 12.
45
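A Python sketch of LRU using an ordered dictionary as the recency queue (names are ours); it reproduces the 12 faults stated above for this string with 3 frames:

```python
from collections import OrderedDict

def lru_faults(reference_string, frame_count):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()               # least recently used page at the front
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)     # a hit refreshes the page's recency
            continue
        faults += 1
        if len(frames) == frame_count:
            frames.popitem(last=False)   # evict the least recently used page
        frames[page] = True
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))   # 12
```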


Example
Q. Explain the FIFO (First In First Out) page replacement algorithm for the reference string 7 0 1 2 0 3 0 4 2 3 1 0 3. (4m)
Sol: A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen. It maintains a FIFO queue to hold all pages in memory. We replace the page at the head of the queue. When a page is brought into memory, we insert it at the tail of the queue. Consider that three frames are available.

No. of page faults = 11.
46




Thank you…..

You can mail your Queries to :


[email protected]

49
