Unit 5 (22516)
Marks: 14
Prepared by:
Mrs. Kousar Ayub A.
Lecturer (Selection Grade)
Computer Engg. Dept
M. H. Saboo Siddik Polytechnic
Learning Outcomes
Introduction
• Memory management is one of the important functions of an OS.
• It allocates main memory space to processes and their data at the time of their execution.
• It moves processes back and forth between main memory and disk during execution.
• Memory management also performs activities such as improving the performance of the computer system, enabling the execution of multiple processes at the same time, and sharing the same memory space among different processes.
Functions of Memory Management
• Process Isolation: process isolation means controlling how one process interacts with the data and memory of another process.
• Long Term Storage: long-term storage of processes will reduce the ...
• Support of Modular Programming: a program is divided into a number of modules; if memory is not sufficient for the entire program, we can load at least some of the modules instead of the entire program. This increases CPU utilization and memory utilization.
• Protection and Access Control: protection mechanisms and access control need not be applied to all processes; it is better to apply them only to the important applications, which saves execution time.
• Keeping Status of Main Memory Locations: memory management keeps track of the status of each memory location, whether it is allocated or free.
Memory Management Scheme
Static (Fixed Sized) Memory Partitioning
• In static memory partitioning, memory is divided into a number of fixed-sized partitions that do not change as the system runs.
• Each partition contains exactly one process, so the number of programs that can be executed (i.e. the degree of multiprogramming) depends on the number of partitions.
• There are two alternatives for fixed-sized memory partitioning: equal-sized partitions and unequal-sized partitions.
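As a rough illustration (not from the slides), the Python sketch below loads processes into fixed partitions: each partition holds at most one process, and a process must wait if no free partition is large enough. The partition sizes and the process list are made-up values.

# Hypothetical sketch of static (fixed sized) partitioning.
# Partition sizes and processes are made-up example values.
partitions = [{"size": 100, "proc": None},   # sizes in KB
              {"size": 200, "proc": None},
              {"size": 300, "proc": None}]

def load(name, size_kb):
    """Place a process into the first free partition that is big enough."""
    for p in partitions:
        if p["proc"] is None and p["size"] >= size_kb:
            p["proc"] = (name, size_kb)
            # unused space inside the partition is internal fragmentation
            print(f"{name} -> {p['size']}KB partition, waste {p['size'] - size_kb}KB")
            return True
    print(f"{name} ({size_kb}KB) must wait: no free partition is large enough")
    return False

for name, size in [("P1", 90), ("P2", 250), ("P3", 120), ("P4", 150)]:
    load(name, size)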
[Figure: static memory partitioning with equal-sized and unequal-sized partitions]
[Figure: job scheduling in fixed-sized memory partitions]
MFT Advantages and Disadvantages
Advantages:
• Simple to implement.
• It requires minimal operating system software and processing overhead.
• Fixed partitioning allows efficient utilization of the processor and I/O devices.
Disadvantages:
• The main problem with the fixed partitioning method is deciding how many partitions to create and what their sizes should be.
• Memory wastage.
• Inefficient use of memory due to internal fragmentation.
• The maximum number of active processes is fixed.
Dynamic (Variable) Memory Partitioning
• When a job arrives and needs memory, we search for a hole large
enough for this job. If we find one, we allocate only as much as is
needed, keeping the rest available to satisfy future requests.
For example, assume 256K of memory is available and the resident monitor occupies 40K. This leaves 216K for user programs.
[Figure: example memory allocation and job scheduling for MVT]
Internal Fragmentation
Internal fragmentation occurs when the memory allocator leaves extra space empty inside a block of memory that has been allocated to a client. This happens, for example, when blocks must be evenly divisible by four, eight or 16 bytes. A client that needs 57 bytes of memory may then be allocated a block containing 60 bytes, or even 64. The extra bytes that the client does not need go to waste.
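A minimal Python sketch of this rounding effect (the block granularities are just the values mentioned above):

def allocated_size(request, granularity):
    """Round a request up to the next multiple of the allocation granularity."""
    blocks = -(-request // granularity)      # ceiling division
    return blocks * granularity

for granularity in (4, 8, 16):
    got = allocated_size(57, granularity)
    print(f"request 57 B, granularity {granularity} B -> "
          f"allocated {got} B, internal fragmentation {got - 57} B")
# granularity 4 -> 60 B allocated (3 B wasted); 8 and 16 -> 64 B allocated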
[Figure: internal fragmentation]
External Fragmentation
External fragmentation exists when enough total memory space exists to satisfy a request, but it is not contiguous; storage is fragmented into a large number of small holes.
For example, suppose holes of 20K and 10K are available in a multiple-partition allocation scheme, and the next process requests 30K of memory. 30K of memory is in fact free, which would satisfy the request, but the free space is not contiguous, so there is external fragmentation of memory.
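A small Python sketch of this situation (hole sizes taken from the example above):

holes = [20, 10]          # free holes in KB, not adjacent to each other
request = 30              # KB

total_free = sum(holes)
largest_hole = max(holes)
print(f"total free = {total_free}K, largest contiguous hole = {largest_hole}K")
if total_free >= request and largest_hole < request:
    print("external fragmentation: enough memory in total, but no single hole fits")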
[Figure: external fragmentation]
Dynamic Storage Allocation
• Disk space can be viewed as a large array of disk blocks. At any given time some of these blocks are allocated to files and others are free.
• First fit allocates the first hole that is big enough, best fit allocates the smallest hole that is big enough, and worst fit allocates the largest hole.
• First fit and best fit are better than worst fit in both time and storage utilization.
• First fit is generally faster.
Dynamic Storage Allocation
Consider a swapping system in which memory consists of the following hole sizes in memory order: 10KB, 4KB, 20KB, 18KB, 7KB, 9KB, 12KB and 15KB. Which hole is taken for successive segment requests of (i) 12KB (ii) 10KB (iii) 9KB under first fit, best fit and worst fit?
Solution: the memory arrangements for the 12KB request are as follows:
[Table: memory arrangement after the 12KB request under first fit, best fit and worst fit]
The memory arrangements for the 10KB request are as follows:
[Table: memory arrangement after the 10KB request under first fit, best fit and worst fit]
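The selections can be checked with a short Python simulation (not from the slides); it splits the chosen hole and leaves the remainder in place, which is the usual way of working such problems:

def simulate(holes, requests, strategy):
    """Return the hole size chosen for each request under the given strategy."""
    holes = holes[:]                       # work on a copy, keep memory order
    chosen = []
    for req in requests:
        candidates = [(i, h) for i, h in enumerate(holes) if h >= req]
        if strategy == "first":
            i, h = candidates[0]
        elif strategy == "best":
            i, h = min(candidates, key=lambda c: c[1])
        else:                              # worst fit
            i, h = max(candidates, key=lambda c: c[1])
        chosen.append(h)
        holes[i] = h - req                 # leftover stays behind as a smaller hole
    return chosen

holes = [10, 4, 20, 18, 7, 9, 12, 15]      # KB, in memory order
requests = [12, 10, 9]
for strategy in ("first", "best", "worst"):
    print(strategy, "fit:", simulate(holes, requests, strategy))
# first fit: [20, 10, 18]   best fit: [12, 10, 9]   worst fit: [20, 18, 15]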
Free Space Management Techniques
• Files are created and deleted frequently during the operation of a
computer system. Since there is only a limited amount of disk space,
it is necessary to reuse the space from deleted files for new files.
• To keep track of free disk space, the file system maintains a free space
list. The free space list records all disk blocks, which are free.
• To create a file, we search the free space list for the required amount of space and allocate it to the new file. This space is then removed from the free space list. When a file is deleted, its disk space is added back to the free space list.
• The process of keeping track of and managing the free blocks of the disk is called free space management. The techniques used for free space management are Bit Vector, Linked List, Grouping and Counting.
Bit Vector
• The free space list need not be implemented as an actual list; it can be implemented as a bit map or bit vector. A bit map is a series or collection of bits in which each bit corresponds to one disk block.
• Each block in the bit map is represented by one bit. If the block is free, the bit is ‘0’; if the block is allocated, the bit is ‘1’.
• For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26 and 27 are free. The free-space bit map would be
11000011000000111001111110001111………
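A short Python sketch that builds this bit map (a 32-block disk is assumed here purely for illustration, matching the 32 bits shown):

free_blocks = {2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27}
num_blocks = 32                            # assumed disk size for the example

# bit '0' = free, bit '1' = allocated, as in the convention above
bit_map = "".join("0" if b in free_blocks else "1" for b in range(num_blocks))
print(bit_map)                             # 11000011000000111001111110001111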
Grouping
• Grouping is a free space management technique that modifies the free-list method. In grouping, the first free block stores the addresses of ‘n’ free blocks. The first n-1 of these are actually free; the last one is the disk address of another block that again contains the addresses of ‘n’ free blocks.
• The importance of this implementation is that the addresses of a large number of free blocks can be found quickly.
• A disk block thus contains the addresses of many free blocks, and a block containing free-block pointers itself becomes free once those blocks have been used.
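A rough Python sketch of the idea (the block numbers and group size are invented; a dict stands in for the disk):

# Grouping with n = 4: each index block stores n-1 = 3 free block numbers
# plus a pointer to the next index block (None at the end). Values invented.
disk = {
    7:  {"free": [12, 19, 30], "next": 41},   # first index block
    41: {"free": [44, 52, 60], "next": None},
}
head = 7

def allocate_block():
    """Hand out one free block number, following the chain of index blocks."""
    global head
    group = disk[head]
    if group["free"]:
        return group["free"].pop()
    # once its listed blocks are used, the index block itself becomes free to use
    old_head, head = head, group["next"]
    return old_head

for _ in range(5):
    print("allocated block", allocate_block())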
[Figure: grouping of free blocks]
Virtual Memory
• Virtual memory is a technique which allows the execution of
processes that may not be completely in memory.
• Virtual memory is the separation of user logical memory from
physical memory. This separation allows an extremely large
virtual memory to be provided for programmers when only a
smaller physical memory is available.
• The basic idea behind virtual memory is that the combined
size of the program, data and stack may exceed the amount of
physical memory available for it.
• The operating system keeps those parts of the program
currently in use in main memory, and the rest on the disk.
[Figure: virtual memory]
Paging
• Paging is a memory management technique by which a computer
stores and retrieves data from secondary storage for use in main
memory. In paging, the operating system retrieves data from
secondary storage in same-size blocks called pages.
• Paging is an important part of virtual memory implementations in
modern operating systems, using secondary storage to let programs
exceed the size of available physical memory.
• The basic idea behind paging is that when a process is swapped in, the pager only loads into memory those pages that it expects the process to need.
[Figure: paging model of logical and physical memory]
Paging Example for a 32-word Memory with 4-word Pages
For example, using a page size of 4 words and a physical memory of 32 words (8 frames), we show how the user’s view of memory can be mapped into physical memory. Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5, so logical address 0 maps to physical address 20 (= 5 × 4 + 0).
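A small Python sketch of this translation; only the page-0-in-frame-5 entry is given on the slide, the rest of the page table would come from the figure:

PAGE_SIZE = 4                     # words per page
page_table = {0: 5}               # page 0 is in frame 5 (from the example)

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]      # look up the frame holding this page
    return frame * PAGE_SIZE + offset

print(translate(0))               # page 0, offset 0 -> frame 5 -> physical 20
print(translate(3))               # page 0, offset 3 -> physical 23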
Paging
Paging itself is a form of dynamic relocation: every logical address is mapped by the paging hardware to some physical address. Each user page needs one frame, so if a job requires n pages, there must be n frames available in memory.
Each page of the job is loaded into one of the allocated frames, and the frame number is put into the page table for this job, and so on. Using a paging scheme we have no external fragmentation: any free frame can be allocated to a job that needs it.
Each job has its own page table. The page table can be implemented as a set of dedicated registers.
Segmentation
• Like paging, segmentation is a memory management scheme that implements the user’s view of a program.
• In segmentation, the entire logical address space is considered as a collection of segments, with each segment having a number and a length.
• The length of a segment may range from 0 to some maximum value specified by the hardware, and may also change during execution. The user specifies each logical address as a segment number (s) and an offset (d).
• A segment is a logical unit such as a main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, array, etc.
Segmentation Hardware
• A segment is defined as a logical grouping of instructions. A logical address space is a collection of segments; every program/job is a collection of segments such as subroutines, arrays, etc.
• Each segment has a name and a length. Addresses specify both the segment name and the offset within the segment, so the user specifies each address by two quantities: a segment name and an offset.
• A logical address consists of two parts: a segment number ‘s’ and an offset into that segment ‘d’. The segment number is used as an index into the segment table. Each entry of the segment table has a segment base and a segment limit.
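A minimal Python sketch of this translation, using a made-up segment table; the offset is checked against the limit and then added to the base:

# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000),
                 1: (6300, 400),
                 2: (4300, 1100)}

def translate(s, d):
    """Map logical address (segment s, offset d) to a physical address."""
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError(f"offset {d} outside segment {s} (limit {limit})")
    return base + d

print(translate(2, 53))            # 4300 + 53 = 4353
try:
    translate(1, 500)              # offset beyond segment 1's limit of 400
except MemoryError as err:
    print("addressing error trap:", err)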
Compaction
Compaction is a method used to overcome the external fragmentation problem: all free blocks are brought together as one large block of free space. The collection of free space from multiple non-contiguous blocks into one large free block in a system’s memory is called compaction.
Compaction is possible only if relocation is dynamic and done at execution time, using base and limit registers. The simplest compaction algorithm is to move all jobs towards one end of memory; all holes then move in the other direction, producing one large hole of available memory.
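A toy Python sketch of this simplest algorithm (the job layout below is invented): every job slides down towards address 0, leaving one large hole at the top of memory.

MEMORY_SIZE = 100   # invented total memory size, in KB

# (name, base, size) of resident jobs, with holes in between -- made-up layout
jobs = [("J1", 0, 20), ("J2", 35, 10), ("J3", 60, 25)]

def compact(jobs):
    """Move all jobs towards the low end; return new bases and the single hole."""
    next_base = 0
    relocated = []
    for name, base, size in sorted(jobs, key=lambda j: j[1]):
        relocated.append((name, next_base, size))   # new base for this job
        next_base += size
    hole = (next_base, MEMORY_SIZE - next_base)     # one large hole at the top
    return relocated, hole

new_jobs, hole = compact(jobs)
print(new_jobs)                      # [('J1', 0, 20), ('J2', 20, 10), ('J3', 30, 25)]
print("hole at address", hole[0], "of size", hole[1], "KB")   # hole at 55, size 45 KB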
[Figure: page table when some pages are not in main memory]
Demand Paging
• Demand paging is a method of virtual memory management.
• With demand-paged virtual memory, pages are only loaded when they
are demanded during program execution; pages that are never accessed
are thus never loaded into physical memory.
• A demand-paging system is similar to a paging system with swapping,
where processes reside in secondary memory (usually a disk). When we
want to execute a process, we swap it into memory.
• Rather than swapping the entire process into memory, however, we use a lazy swapper, called a pager. A lazy swapper never swaps a page into memory unless that page will be needed.
• When a process is to be swapped in, the pager guesses which pages will
be used before the process is swapped out again. Instead of swapping in
a whole process, the pager brings only those necessary pages into
memory.
Page Replacement Algorithms
• When the processor needs to execute a page and that page is not available in main memory, this situation is called a page fault.
• To bring the required page into main memory when no space is available, we need to remove a page from main memory to make room for the new page that needs to be executed.
FIFO (First In First Out) Page Replacement Algorithm
Consider the following reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.
The three frames are initially empty.
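A short Python simulation of FIFO replacement on this reference string with three frames; it reports 15 page faults:

from collections import deque

def fifo_faults(reference, num_frames):
    """Count page faults using first-in first-out replacement."""
    frames = deque()                       # oldest page sits at the left
    faults = 0
    for page in reference:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()           # evict the page loaded earliest
            frames.append(page)
    return faults

reference = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(reference, 3))           # -> 15 page faults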
LRU (Least Recently Used) Page Replacement Algorithm
• If we use the recent past as an approximation of the near
future, then we would replace that page which has not
been used for the longest period of time. This is the least
recently used algorithm.
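A matching Python simulation for LRU on the same reference string with three frames; it reports 12 page faults:

def lru_faults(reference, num_frames):
    """Count page faults when the least recently used page is replaced."""
    frames = []                            # most recently used page at the end
    faults = 0
    for page in reference:
        if page in frames:
            frames.remove(page)            # refresh this page's recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)              # evict the least recently used page
        frames.append(page)
    return faults

reference = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(reference, 3))            # -> 12 page faults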