Memory Management
Memory: - it is a large array of words or bytes, each with its own address.
Address binding: -
The binding of instructions and data to memory addresses is called address binding.
Swapping: -
A process needs to be in memory to be executed. A process, however, can be swapped
temporarily out of memory to a backing store, and then brought back into memory for
continued execution.
EX: - In the round robin C.P.U scheduling algorithm, when a quantum expires, the memory
manager starts to swap out the process whose quantum has expired, and to swap in another
process to that memory space.
EX: - In a preemptive priority-based algorithm, if a higher priority process arrives and
wants service, the memory manager can swap out a lower priority process, so that it
can load and execute the higher priority process. When the higher priority process
finishes, the lower priority process can be swapped back in and continued. This is called
Roll Out, Roll In.
Swapping requires a backing store. The backing store is commonly a fast disk. It must
be large enough to accommodate copies of all memory images for all users, and it must
provide direct access to these memory images.
Never swap a process with pending I/O.
CONTIGUOUS ALLOCATION:-
The memory is usually divided into two partitions: one for the resident O.S and the other
for the user processes. It is possible to place the O.S in either low or high memory, but
usually the O.S is kept in low memory because the interrupt vector is often in low memory.
There are two types.
1) Single partition allocation
2) Multiple partition allocation
a) Fixed sized partition (M.F.T)
b) Variable sized partition (M.V.T)
Single partition allocation: -
The O.S resides in low memory and the user processes execute in high memory.
We need to protect the O.S code and data from changes by the user processes. We can
provide this protection by using a relocation register and a limit register. The relocation
register contains the value of the smallest physical address and the limit register contains
the range of logical addresses.
In the above fig, the address generated by the C.P.U is compared with the limit register;
if the logical address is less than the limit register, the logical address is added to the
relocation register value and the mapped address is sent to memory; otherwise a trap
(addressing error) occurs.
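A minimal C sketch of this check; the register values (14000 and 3000) are illustrative assumptions, not taken from the figure:

    /* Relocation (base) and limit register check done by the MMU hardware. */
    #include <stdio.h>
    #include <stdlib.h>

    unsigned relocation_reg = 14000;   /* smallest physical address of the process */
    unsigned limit_reg      = 3000;    /* range of legal logical addresses         */

    unsigned translate(unsigned logical_addr)
    {
        if (logical_addr >= limit_reg) {
            fprintf(stderr, "trap: addressing error (logical %u >= limit %u)\n",
                    logical_addr, limit_reg);
            exit(1);
        }
        return logical_addr + relocation_reg;   /* mapped physical address */
    }

    int main(void)
    {
        printf("logical 100 -> physical %u\n", translate(100));   /* prints 14100 */
        return 0;
    }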
Disadvantages: -
1) Much memory is wasted.
2) Only one user process can run at a time.
Multiple partition allocation: -
If several user processes reside in memory at the same time, the problem arises of how
to allocate the available memory to the various processes.
One of the simplest schemes for memory allocation is to divide the memory into partitions.
1) Fixed sized partitions (M.F.T): - Memory is divided into a number of fixed sized
partitions. Each partition may contain exactly one process. Thus the degree of
multiprogramming is bound by the number of partitions. When a partition is free, a process
is selected from the input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process.
Disadvantages: -
1) In M.F.T, internal fragmentation can occur.
2) If a process needs less memory than its allocated partition, then some memory is wasted.
3) Similarly, if a process needs more memory than its allocated partition size, then it
cannot fit in that partition and a problem occurs.
2) Variable sized partitions (M.V.T): - Here the O.S maintains a table indicating which
parts of memory are available and which are occupied. Initially all memory is available
for user processes and is considered as one large block of available memory, called a hole.
When a process arrives and needs memory, we search for a hole large enough for this
process. If we find one, we allocate only as much memory as is needed; the remaining
memory is kept available to satisfy other requests.
Generally, as processes enter the system, they are put into an input queue.
The O.S takes from each process the information of how much memory it needs.
The O.S also knows how much memory is available and determines which process is
allocated memory.
A process is loaded into memory; it executes, and when it terminates, it releases
its memory. This memory is then filled with another process by the O.S.
To allocate memory to a process, we search for a suitable hole among the set of available
holes. Three strategies are used to select a free hole (a small sketch of all three follows this list).
1) First fit 2) Best fit 3) Worst fit
1) First fit: - allocate the first hole that is big enough.
2) Best fit: - allocate the smallest hole that is big enough.
3) Worst fit: - allocate the largest hole.
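A minimal C sketch of the three strategies over an assumed list of free-hole sizes; the hole sizes and the 212 KB request are illustrative, not from the notes:

    #include <stdio.h>

    #define NHOLES 5
    int holes[NHOLES] = {100, 500, 200, 300, 600};   /* free hole sizes in KB */

    /* each function returns the index of the chosen hole, or -1 if none fits */
    int first_fit(int request)
    {
        for (int i = 0; i < NHOLES; i++)
            if (holes[i] >= request)
                return i;                     /* first hole big enough */
        return -1;
    }

    int best_fit(int request)
    {
        int best = -1;
        for (int i = 0; i < NHOLES; i++)
            if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
                best = i;                     /* smallest hole big enough */
        return best;
    }

    int worst_fit(int request)
    {
        int worst = -1;
        for (int i = 0; i < NHOLES; i++)
            if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
                worst = i;                    /* largest hole big enough */
        return worst;
    }

    int main(void)
    {
        int req = 212;
        printf("request %d KB: first fit -> hole %d, best fit -> hole %d, worst fit -> hole %d\n",
               req, first_fit(req), best_fit(req), worst_fit(req));
        return 0;
    }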
Disadvantages: -
1) External fragmentation can occur.
Fragmentation: -
The wastage of memory space is called fragmentation. There are two types.
1) Internal fragmentation 2) External fragmentation.
1) External fragmentation: - It exists when enough total memory space is available to
satisfy a request, but it is not contiguous.
Ex: - In this example, process P7 requests memory; the total available memory space
satisfies the P7 request, but that space is not contiguous.
2) Internal fragmentation: -
The wastage of memory inside an allocated block.
EX: - In M.F.T a process is allocated a fixed partition. Suppose the size of each partition
is 5 bytes and the process requests 4 bytes of memory. Here 1 byte of memory is wasted
inside the allocated block. This is called internal fragmentation.
Another type of problem arises in multiple partition allocation.
Suppose a hole of 18,464 bytes is available and the next process requests 18,462 bytes. If we
allocate exactly the requested block, a hole of 2 bytes is left free. To keep track of this
2-byte hole, the O.S needs more memory than the hole itself. So the general approach is to
allocate such very small holes as part of the larger request. Thus the allocated memory may be
slightly larger than the requested memory. The difference between these two numbers is
internal fragmentation.
Compaction: - One solution to the external fragmentation problem is compaction. In the above
figure, the three holes of size 20K, 50K and 80K can be compacted into one hole of size 150K.
Compaction is not always possible. If we move processes, then for these processes to be
able to execute in their new locations, all internal addresses must be relocated. If the
relocation is static, compaction is not possible.
If the relocation is dynamic, then compaction is possible. When compaction is possible,
we must determine its cost.
The simplest compaction algorithm is to move all processes towards one end of memory;
all holes move in the other direction, producing one large hole of available memory.
But this can be very expensive.
Note that a cheaper scheme may produce the one large hole of available memory not at the
end of memory, but in the middle.
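A minimal C sketch of the simplest algorithm, assuming dynamic relocation; the process names, bases and sizes are made-up values for illustration:

    #include <stdio.h>

    struct seg { const char *name; unsigned base; unsigned size; };

    int main(void)
    {
        /* processes with holes between them, sorted by base address */
        struct seg mem[] = { {"P1", 0, 100}, {"P2", 120, 50}, {"P3", 250, 80} };
        unsigned n = sizeof mem / sizeof mem[0];
        unsigned next = 0;                       /* next free address at the low end */

        for (unsigned i = 0; i < n; i++) {
            mem[i].base = next;                  /* slide process toward low memory */
            printf("%s moved to base %u\n", mem[i].name, mem[i].base);
            next += mem[i].size;
        }
        printf("after compaction, one large hole starts at address %u\n", next);
        return 0;
    }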
Paging: -
Another possible solution to the external fragmentation problem is paging.
Physical memory is broken into fixed sized blocks called frames.
Logical memory is also broken into blocks of the same size called pages.
When a process is to be executed, its pages are loaded into any available memory
frames from the backing store.
The backing store is divided into fixed sized blocks that are of the same size as the
memory frames.
Paging hardware: -
Every address generated by the C.P.U is divided into two parts:
1) Page number 2) Page offset
The page number is used as an index into a page table.
The page table contains the base address of each page in physical memory.
This base address is combined with the page offset to define the physical memory address.
The page size is defined by the H/W. The size of a page is typically a power of 2.
A power of 2 is selected as the page size because it makes the translation of a logical
address into a page number and page offset easy.
If the size of the logical address space is 2^m and the page size is 2^n bytes, then the
high-order m-n bits of a logical address designate the page number and the n low-order
bits designate the page offset.
  page number | page offset
       p      |      d
     m - n    |      n
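A minimal C sketch of this split, assuming a 4 KB page size (n = 12); the sample address is an illustrative value:

    #include <stdio.h>

    #define PAGE_BITS 12                        /* n: number of offset bits */
    #define PAGE_SIZE (1u << PAGE_BITS)         /* 4096 bytes               */

    int main(void)
    {
        unsigned logical = 0x12345;             /* illustrative logical address */
        unsigned page    = logical >> PAGE_BITS;          /* high-order m-n bits */
        unsigned offset  = logical & (PAGE_SIZE - 1);     /* n low-order bits    */
        printf("logical 0x%x -> page %u, offset 0x%x\n", logical, page, offset);
        return 0;
    }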
By using paging we have no external fragmentation: any free frame can be allocated to a
process that needs it.
However, we may have some internal fragmentation. If the memory requirement of a process
does not fall on page boundaries, the last frame allocated may not be completely full.
Ex: - If a process needs n pages plus one byte, it is allocated n + 1 frames, resulting in
internal fragmentation of almost an entire frame.
When a process arrives in the system to be executed, its size, expressed in pages, is
examined. Each page of the process needs one frame.
Thus if the process requires n pages, there must be at least n frames available in
memory.
The first page of the process is loaded into one of the allocated frames and the frame
number is put in the page table for this process.
The next page is loaded into another frame and its frame number is put into the page
table, and so on.
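A minimal C sketch of this loading step; the free-frame list and the 3-page process are assumptions for illustration:

    #include <stdio.h>

    #define NFRAMES 8

    int free_frame[NFRAMES] = {1, 1, 0, 1, 0, 1, 1, 1};   /* 1 = frame is free */

    /* allocate one free frame; returns the frame number, or -1 if none is free */
    int alloc_frame(void)
    {
        for (int f = 0; f < NFRAMES; f++)
            if (free_frame[f]) { free_frame[f] = 0; return f; }
        return -1;
    }

    int main(void)
    {
        int npages = 3;                 /* process size expressed in pages */
        int page_table[3];

        for (int p = 0; p < npages; p++) {
            page_table[p] = alloc_frame();         /* record frame in page table */
            printf("page %d -> frame %d\n", p, page_table[p]);
        }
        return 0;
    }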
Paging Example: - In this example the logical memory size is 16 bytes and the physical
memory size is 32 bytes. The page size is 4 bytes, so the logical memory has 4 pages and
the main memory has 8 frames.
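A minimal C sketch of translation for this example (4-byte pages, 4 pages, 8 frames); the page-table contents below are an assumption for illustration, not necessarily those of the figure:

    #include <stdio.h>

    #define PAGE_SIZE 4
    int page_table[4] = {5, 6, 1, 2};   /* page -> frame */

    unsigned translate(unsigned logical)
    {
        unsigned page   = logical / PAGE_SIZE;
        unsigned offset = logical % PAGE_SIZE;
        return page_table[page] * PAGE_SIZE + offset;
    }

    int main(void)
    {
        /* e.g. logical address 3 is page 0, offset 3 -> frame 5, physical 23 */
        for (unsigned a = 0; a < 16; a++)
            printf("logical %2u -> physical %2u\n", a, translate(a));
        return 0;
    }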
The O.S manages physical memory, so it must maintain the allocation details of physical
memory: which frames are allocated, which frames are available, how many total frames
there are, and so on. This information is generally kept in a data structure called the
frame table.
The O.S also maintains a copy of the page table for each process. Paging therefore
increases the context switch time.
Implementation of the page table: -
The page table can be implemented in 3 different ways.
1) The page table can be implemented as a set of dedicated registers. These registers
are built with very high-speed logic to make the translation of logical addresses
efficient. The C.P.U dispatcher reloads these registers on a context switch.
The use of registers for the page table is satisfactory if the page table is small.
If the page table is very large, then dedicated registers are not feasible.
2) In this method the page table is kept in main memory and a page table base register
(PTBR) points to the page table. Changing page tables requires changing only this one
register, reducing the context switch time.
The problem with this approach is the time required to access a user memory
location.
If we want to access location i, we must first index into the page table using the
PTBR; this task requires a memory access.
It provides us the frame number, which is combined with the page offset to produce
the actual address; we can then access the desired place in memory.
This scheme therefore requires two memory accesses for every user memory access.
3) The solution to this problem is to use special registers called associative registers
or a translation look-aside buffer (TLB). A set of associative registers is built with
very high-speed memory. Each register consists of two parts:
1) a key and 2) a value.
When the associative registers are presented with an item, it is compared with all keys
simultaneously. If the item is found, the corresponding value field is output. The
search is very fast.
In the above fig the associative registers contain only a few of the page table
entries.
When a logical address is generated by the C.P.U, its page number is presented to the
set of associative registers that contain page numbers and their corresponding frame
numbers.
If the page number is found in the associative registers, its frame number is immediately
available and is used to access memory.
If the page number is not in the associative registers, a memory reference to the page
table must be made. If the TLB is full, the O.S must select an entry for replacement.
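A minimal C sketch of a TLB lookup that falls back to the in-memory page table on a miss; the sizes, contents and the linear search are assumptions for illustration (a real TLB compares all keys in parallel in hardware):

    #include <stdio.h>

    #define TLB_SIZE 4
    #define NPAGES   8

    struct tlb_entry { int page; int frame; int valid; };

    struct tlb_entry tlb[TLB_SIZE] = { {0, 5, 1}, {2, 1, 1} };  /* a few cached entries */
    int page_table[NPAGES]         = { 5, 6, 1, 2, 7, 0, 3, 4 };

    int lookup(int page)
    {
        for (int i = 0; i < TLB_SIZE; i++)           /* TLB hit: fast path */
            if (tlb[i].valid && tlb[i].page == page)
                return tlb[i].frame;
        /* TLB miss: one extra memory reference to the page table */
        return page_table[page];
    }

    int main(void)
    {
        printf("page 2 -> frame %d (TLB hit)\n",  lookup(2));
        printf("page 3 -> frame %d (TLB miss)\n", lookup(3));
        return 0;
    }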
The percentage of times that a page number is found in the associative registers is
called the hit ratio. An 80 percent hit ratio means that we find the desired page
number in the associative registers 80 percent of the time.
If the page number is in the associative registers, then it takes 20 nanoseconds to
search the associative registers and 100 nanoseconds to access memory, so a mapped
memory access takes 120 nanoseconds.
If the page number is not in the associative registers, then it takes 220 nanoseconds:
20 nanoseconds to search the associative registers, 100 nanoseconds to access the page
table in memory for the frame number, and 100 nanoseconds to access the desired byte
in memory.
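From these two figures and the 80 percent hit ratio, the effective access time follows (this calculation is implied but not written out in the notes):

    effective access time = 0.80 x 120 + 0.20 x 220 = 96 + 44 = 140 nanoseconds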
PROTECTION: -
Memory protection in a paged environment is accomplished in two ways:
1) by using a valid-invalid bit;
2) by using read-only (or) read-write bits.
These protection bits are attached to each entry of the page table.
Here the valid-invalid bit is attached to each entry in the page table. The bit contains
"v" if the entry is valid, i.e. the page is in the process's logical address space. The
bit contains "i" if it is invalid, i.e. the page is not in the process's logical address
space.
Ex: - Suppose a process contains pages 0 to 5 (six pages). Then page numbers 6, 7, 8 will
have "i" in the valid-invalid bit, so if pages 6, 7 or 8 are referenced, a trap will occur.
The other method is to add read-write or read-only bits to each entry of the page
table. The page table can then be checked to verify that no writes are being made to a
read-only page.
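A minimal C sketch of both checks before a write; the field names, frame numbers and which pages are valid or read-only are assumptions for illustration:

    #include <stdio.h>

    struct pte { int frame; int valid; int writable; };

    struct pte page_table[8] = {
        {2, 1, 1}, {3, 1, 1}, {4, 1, 1},
        {7, 1, 1}, {8, 1, 1}, {9, 1, 0},   /* page 5: valid but read-only */
        {0, 0, 0}, {0, 0, 0},              /* pages 6, 7: invalid         */
    };

    void check_write(int page)
    {
        if (!page_table[page].valid) {
            printf("trap: invalid page %d referenced\n", page);
            return;
        }
        if (!page_table[page].writable) {
            printf("trap: write to read-only page %d\n", page);
            return;
        }
        printf("write to page %d allowed (frame %d)\n", page, page_table[page].frame);
    }

    int main(void)
    {
        check_write(1);   /* allowed               */
        check_write(5);   /* trap: read-only page  */
        check_write(6);   /* trap: invalid page    */
        return 0;
    }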
Structure of Page Table:
1. Hierarchical Paging/ Multi Level Paging
2. Hashed Page Tables
3. Inverted Page Tables
MULTI LEVEL PAGING:-
Suppose a process requires 2^32 bytes of memory and the page size is 2^12 bytes. Then
the number of pages will be 2^32 / 2^12 = 2^20, so the page table itself has 2^20 entries
and becomes too large to keep as one contiguous table in memory.
Instead of maintaining a single page table, we apply paging to the page table itself:
the page table is divided into pages (a two-level scheme).
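A minimal C sketch of splitting a 32-bit logical address for a two-level page table; the 10/10/12 bit split and the sample address are assumptions for illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned logical = 0xCAFEBABE;           /* illustrative 32-bit address */
        unsigned p1 = (logical >> 22) & 0x3FF;   /* index into outer page table */
        unsigned p2 = (logical >> 12) & 0x3FF;   /* index into inner page table */
        unsigned d  =  logical        & 0xFFF;   /* page offset (12 bits)       */
        printf("logical 0x%08X -> p1 = %u, p2 = %u, d = 0x%03X\n", logical, p1, p2, d);
        return 0;
    }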
Shared pages: -
Another advantage of paging is the possibility of sharing common code.
Ex: - If many users run the same program (for example a text editor), a single copy of its
reentrant (read-only) code pages can be shared by all the processes, while each process
keeps its own data pages.