Memory Management

The document discusses memory management, focusing on address binding methods (compile time, load time, execution time), dynamic loading, and linking. It explains logical versus physical address spaces, swapping processes, and memory allocation strategies (contiguous, fixed, and variable partitions), along with fragmentation types and compaction. Additionally, it covers paging as a solution to fragmentation, the implementation of page tables, and memory protection mechanisms.

MEMORY MANAGEMENT

Memory: - it is a large array of words or bytes, each with its own address.
Address binding: -
The binding of instructions and data to memory addresses is called address
binding. The binding can be done in three ways:
1) Compile time 2) Load time 3) Execution time.
Compile time: - binding is done at compilation time. If it is known at compile
time where the process will reside in memory, then absolute code can be
generated.
Load time: - if it is not known at compile time where the process will reside in
memory, then the compiler must generate relocatable code. In this case the final
binding is delayed until load time.
Execution time: - if the process can be moved during its execution from one
memory segment to another, then binding must be delayed until run time.
Dynamic loading: -
Loading of a routine is postponed until it is first called at execution time. To
obtain better memory utilization we can use dynamic loading: a routine is not
loaded until it is called, all routines are kept on disk in a relocatable load
format, and an unused routine is never loaded.
Dynamic linking: - linking is postponed until execution time. Many operating
systems support only static linking, in which the system language libraries are
treated like any other module and combined into the program image; much memory
is wasted this way. Dynamic linking instead uses a stub, a small piece of code
that indicates how to locate or load the library routine if it is not already
present in memory.
Logical versus physical address space: -
Logical address:
1) The address generated by the C.P.U or a user process is commonly referred to
as a logical address.
2) It is a relative address.
3) The set of all logical addresses generated by a program is called the logical
address space. The user program deals with logical addresses.
4) Logical addresses are used in user mode.
Physical Address: -
1) An address seen by the memory unit is called a physical address.
2) It is an absolute address.
3) The set of all physical addresses corresponding to these logical addresses is
referred to as the physical address space. The user program never sees the real
physical addresses.
4) A computer system's physical memory is a hardware device.
5) Physical addresses are used only in system mode.
In compile-time and load-time address binding the logical and physical addresses
are the same; in execution-time binding they differ.

Swapping: -
A process needs to be in memory to be executed. A process, however, can be
swapped temporarily out of memory to a backing store and then brought back into
memory for continued execution.
Ex: - in the round-robin C.P.U scheduling algorithm, when a quantum expires, the
memory manager can swap the finished process out of memory and swap another
process into the freed memory space.
Ex: - in a preemptive priority-based algorithm, if a higher priority process
arrives and wants service, the memory manager can swap out a lower priority
process so that the higher priority process can be loaded and executed. When the
higher priority process finishes, the lower priority process is swapped back in
and continued. This is called Roll Out, Roll In.
Swapping requires a backing store, commonly a fast disk. It must be large enough
to accommodate copies of all memory images for all users, and it must provide
direct access to these memory images.
A process with pending I/O should never be swapped out.
CONTIGUOUS ALLOCATION:-
The memory is usually divided into two partitions: one for the resident O.S and
the other for the user processes. It is possible to place the O.S in either low
or high memory, but the O.S is usually kept in low memory because the interrupt
vector is often in low memory.
There are two types.
1) Single partition allocation
2) Multiple partition allocation
a) Fixed sized partition (M.F.T)
b) Variable sized partition (M.V.T)
Single partition allocation: -
The O.S resides in low memory and the user processes execute in high memory.
We need to protect the O.S code and data from changes by the user processes. We
can provide this protection by using a relocation register and a limit register.
The relocation register contains the value of the smallest physical address and
the limit register contains the range of logical addresses.

In the above fig the address generated by the C.P.U is compared with the limit
register; if the logical address is less than the limit register, the logical
address is added to the relocation register and the mapped address is sent to
memory. Otherwise a trap to the O.S is generated.
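The relocation/limit check above can be sketched in a few lines. This is a minimal sketch with hypothetical register values (1000 and 14000); in a real system the comparison and addition are done in hardware on every memory reference.

```python
# Sketch of the limit/relocation check, assuming hypothetical
# register values. The check itself is done in hardware.

LIMIT = 1000        # limit register: range of legal logical addresses
RELOCATION = 14000  # relocation register: smallest physical address

def translate(logical_address):
    """Map a logical address to a physical address, or trap."""
    if logical_address >= LIMIT:
        # addressing error: trap to the O.S
        raise MemoryError("trap: addressing error")
    return logical_address + RELOCATION

print(translate(346))   # 14346
```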
Disadvantages: -
1) Much memory is wasted.
2) Only one user process can run at a time.
Multiple partition allocation: -
If there are several user processes residing in memory at the same time, the
problem arises of how to allocate the available memory to the various processes.
One of the simplest schemes for memory allocation is to divide the memory into
partitions.
1) Fixed sized partitions (M.F.T): - memory is divided into a number of fixed
sized partitions. Each partition may contain exactly one process, so the degree
of multiprogramming is bounded by the number of partitions. When a partition is
free, a process is selected from the input queue and is loaded into the free
partition. When the process terminates, the partition becomes available for
another process.
Disadvantages: -
1) In M.F.T internal fragmentation can occur: if a process needs less memory
than its allocated partition, the rest of the partition is wasted.
2) If a process needs more memory than the partition size, it cannot be loaded,
and a problem occurs.
Variable sized partitions (M.V.T): - here the O.S maintains a table indicating
which parts of memory are available and which are occupied. Initially all memory
is available for user processes and is considered one large block of available
memory, called a hole. When a process arrives and needs memory, we search for a
hole large enough for this process. If we find one, we allocate only as much
memory as is needed; the rest is kept available to satisfy other requests.

 Generally, as processes enter the system, they are put into an input queue.
 The O.S takes from each process the information about how much memory it
needs, determines how much memory is available, and decides which processes are
allocated memory.
 A process is loaded into memory and executes; when it terminates, it releases
its memory, which the O.S can then fill with another process.
 To allocate memory to a process we search the set of holes for a suitable one;
three strategies are used to select a free hole from the set of available holes:
1) First fit 2) best fit 3) worst fit
1) First fit: - allocate the first hole that is big enough.
2) Best fit: - allocate the smallest hole that is big enough.
3) Worst fit: - allocate the largest hole.
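The three strategies can be sketched as follows. The hole sizes here are hypothetical, chosen only to show that each strategy can pick a different hole for the same request:

```python
# First fit, best fit, and worst fit over a list of free hole sizes.
# Each function returns the index of the chosen hole, or None.

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:         # first hole that is big enough
            return i
    return None

def best_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [100, 500, 200, 300, 600]   # hypothetical free hole sizes in KB
print(first_fit(holes, 212))  # 1 (the 500 KB hole)
print(best_fit(holes, 212))   # 3 (the 300 KB hole)
print(worst_fit(holes, 212))  # 4 (the 600 KB hole)
```

Simulations generally show that first fit and best fit are better than worst fit in both time and storage utilization.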
Disadvantages: -
1) External fragmentation can occur.
Fragmentation: -
The wastage of memory space is called fragmentation. There are two types.
1) Internal fragmentation 2) External fragmentation.
1) External fragmentation: - exists when enough total memory space is available
to satisfy a request, but the space is not contiguous.
Ex: - process p7 requests memory; the total available memory space satisfies
p7's request, but that space is not contiguous.

2) Internal fragmentation: -
The wastage of memory inside an allocated block.
Ex: - in M.F.T a process is allocated a fixed partition. If the size of each
partition is 5 bytes and a process requests 4 bytes, then 1 byte of memory is
wasted inside the block. This is called internal fragmentation.
Another problem arises in multiple partition allocation. Suppose the next
process requests 18,462 bytes from a hole of 18,464 bytes. If we allocate
exactly the requested block, a hole of 2 bytes is left free; the overhead to
keep track of this hole is larger than the hole itself. The general approach is
therefore to allocate such very small holes as part of the larger request. Thus
the allocated memory may be slightly larger than the requested memory, and the
difference between these two numbers is internal fragmentation.

Compaction: - one solution to the problem of external fragmentation is compaction.


The goal of compaction is to shuffle the memory contents so as to place all free
memory together in one large block.

In the above figure, the three holes of size 20K, 50K, and 80K can be compacted
into one hole of size 150K.
Compaction is not always possible. If we move the processes, then for those
processes to be able to execute in their new locations, all internal addresses
must be relocated. If relocation is static, compaction cannot be done; if
relocation is dynamic, compaction is possible. When compaction is possible, we
must determine its cost.
The simplest compaction algorithm is to move all processes toward one end of
memory, with all holes moving in the other direction, producing one large hole
of available memory. This scheme can be very expensive.
Note that a cheaper rearrangement may leave the one large hole of available
memory not at the end of memory but in the middle.

Paging: -
 Another possible solution to the external fragmentation problem is paging.
 The physical memory is broken into fixed sized blocks called frames.
 The logical memory is also broken into blocks of the same size called pages.
 When a process is to be executed, its pages are loaded into any available
memory frames from the backing store.
 The backing store is divided into fixed sized blocks that are of the same size
as the memory frames.
Paging hardware: -
Every address generated by the C.P.U is divided in to two parts:
1) Page number 2) page offset
 The page number is used as an index into a page table.
 The page table contains the base address of each page in physical memory.
 This base address is combined with the page offset to define the physical
memory address.

The page size is defined by the hardware. The size of a page is typically a
power of 2. A power of 2 is selected as the page size because it makes the
translation of a logical address into a page number and page offset easy.
 If the size of the logical address space is 2^m and the page size is 2^n
bytes, then the high-order m-n bits of a logical address contain the page number
and the n low-order bits designate the page offset.
| page number | page offset |
|      p      |      d      |
|    m - n    |      n      |
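Because the page size is a power of 2, the split is just a shift and a mask. A small sketch, using the 4-byte page size (n = 2) from the paging example below and a hypothetical page table:

```python
# Split a logical address into (page number, offset) and translate it,
# assuming a 4-byte page size (n = 2) and an illustrative page table.

PAGE_BITS = 2                 # n: page size = 2^2 = 4 bytes
PAGE_SIZE = 1 << PAGE_BITS

def split(logical_address):
    page_number = logical_address >> PAGE_BITS   # high-order m-n bits
    offset = logical_address & (PAGE_SIZE - 1)   # low-order n bits
    return page_number, offset

def translate(logical_address, page_table):
    p, d = split(logical_address)
    frame = page_table[p]                        # page table lookup
    return frame * PAGE_SIZE + d                 # frame base + offset

page_table = [5, 6, 1, 2]     # hypothetical frames for pages 0-3
print(split(13))                  # (3, 1): page 3, offset 1
print(translate(13, page_table))  # frame 2 -> 2*4 + 1 = 9
```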
 By using paging we have no external fragmentation: any free frame can be
allocated to a process that needs it.
 However we may have some internal fragmentation. If the memory requirements of
a process do not happen to fall on page boundaries, the last frame allocated may
not be completely full.
Ex: - a process that needs 'n' pages plus one byte would be allocated n + 1
frames, resulting in internal fragmentation of almost an entire frame.
 When a process arrives in the system to be executed, its size, expressed in
pages, is examined. Each page of the process needs one frame.
 Thus if the process requires 'n' pages, there must be at least 'n' frames
available in memory.
 The first page of the process is loaded into one of the allocated frames and
the frame number is put in the page table for this process.
 The next page is loaded into another frame and its frame number is put into
the page table, and so on.

Paging Example: - in this example the logical memory size is 16 bytes and the
physical memory size is 32 bytes. The page size is 4 bytes, so logical memory
has 4 pages and main memory has 8 frames.
 The O.S manages physical memory, so it must maintain the allocation details of
physical memory: which frames are allocated, which frames are available, how
many total frames there are, and so on. This information is generally kept in a
data structure called the frame table.
 The O.S also maintains a copy of the page table for each process. Paging
therefore increases the context-switch time.
Implementation of the page table: -
The page table can be implemented in 3 different ways.
1) The page table can be implemented as a set of dedicated registers. These
registers are built with very high-speed logic to make translation of logical
addresses efficient. The C.P.U dispatcher reloads these registers on each
context switch.
 The use of registers for the page table is satisfactory if the page table is
small; if the page table is very large, dedicated registers are not feasible.
2) In this method the page table is kept in main memory and a page table base
register (PTBR) points to the page table. Changing page tables requires changing
only this one register, substantially reducing context-switch time.
 The problem with this approach is the time required to access a user memory
location.
 If we want to access location i, we must first index into the page table using
the PTBR; this task requires one memory access.
 It provides us the frame number, which is combined with the page offset to
produce the actual address, with which we can access the desired place in
memory.
 This scheme therefore requires two memory accesses per reference.
3) The solution to this problem is to use a set of special registers called
associative registers or translation look-aside buffer (TLB). The associative
registers are built of very high-speed memory. Each register consists of two
parts:
1) a key and 2) a value.
When the associative registers are presented with an item, it is compared with
all keys simultaneously. If the item is found, the corresponding value field is
output. The search is very fast.

 In the above fig the associative registers contain only a few of the page
table entries.
 When a logical address is generated by the C.P.U, its page number is presented
to the set of associative registers that contain page numbers and their
corresponding frame numbers.
 If the page number is found in the associative registers, its frame number is
immediately available and is used to access memory.
 If the page number is not in the associative registers, a memory reference to
the page table must be made. If the TLB is full, the O.S must select one entry
for replacement.
 The percentage of times that a page number is found in the associative
registers is called the hit ratio. An 80 percent hit ratio means that we find
the desired page number in the associative registers 80 percent of the time.
 If the page number is in the associative registers, it takes 20 nanoseconds to
search the associative registers and 100 nanoseconds to access memory, so a
mapped memory access takes 120 nanoseconds.
 If the page number is not in the associative registers, it takes 220
nanoseconds: 20 for searching the associative registers, 100 for the memory
access to the page table to obtain the frame number, and 100 for the access to
the desired byte in memory.
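With these figures, the effective memory-access time is the hit-ratio-weighted average of the two cases:

```python
# Effective access time for the figures above: 80 percent hit ratio,
# 120 ns on a TLB hit and 220 ns on a miss.

HIT_RATIO = 0.80
HIT_TIME = 120    # ns: 20 (TLB search) + 100 (memory access)
MISS_TIME = 220   # ns: 20 + 100 (page table) + 100 (desired byte)

effective = HIT_RATIO * HIT_TIME + (1 - HIT_RATIO) * MISS_TIME
print(round(effective))   # 140 ns
```

So on average each memory access is slowed from 100 ns to 140 ns, a 40 percent slowdown.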
PROTECTION: -
Memory protection in a paged environment is accomplished in two ways:
1) by using a valid-invalid bit;
2) by read-only (or) read-write bits.
 These protection bits are attached to each entry of the page table.
 The valid-invalid bit is attached to each entry in the page table. The bit
contains "v" if the page is valid, i.e. the page is in the process's logical
address space, and "i" if it is invalid, i.e. the page is not in the process's
logical address space.

Here a process contains six pages (0 to 5), so page table entries 6, 7, and 8
will have "i" in the valid-invalid bit. If page 6, 7, or 8 is referenced, a trap
will occur.
 The other method is to attach read-write or read-only bits to each entry of
the page table. The page table can then be checked to verify that no writes are
being made to a read-only page.
Structure of Page Table:
1. Hierarchical Paging/ Multi Level Paging
2. Hashed Page Tables
3. Inverted Page Tables
MULTI LEVEL paging: -
 Suppose a process requires 2^32 bytes of memory and the page size is 2^12
bytes. Then the number of pages will be 2^20.
 Instead of maintaining a single huge page table, we apply paging to the page
table itself.

TWO level paging:-


If the logical address space is 2^32 bytes, then the logical address is divided
as follows:
| page number | offset  |
|   20 bits   | 12 bits |
The 20-bit page number is again divided into two parts:
|   p1    |   p2    |    d    |
| 10 bits | 10 bits | 12 bits |
where p1 is an index into the outer page table, and p2 is the displacement
within the page of the inner page table. This is known as a forward-mapped page
table. If p1 is split yet again into a page number and an offset, the scheme is
called three level paging.
 For a system with a 64-bit logical address space, a two level paging scheme is
no longer appropriate, so a three level paging scheme must be used.
 If the page size is 4 KB (2^12) and the logical address space is 64-bit, then
the page table has 2^52 entries. With a two level scheme, the inner page tables
could each hold 2^10 4-byte entries.
o The outer page table would then have 2^42 entries, or 2^44 bytes.
o One solution is to add a 2nd outer page table.
 But in that case the 2nd outer page table is still 2^34 bytes in
size, and possibly 4 memory accesses are needed to reach one
physical memory location.
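The 10/10/12 split for the 32-bit case above can be sketched with shifts and masks (the example address is arbitrary):

```python
# Two-level address splitting for a 32-bit logical address:
# 10-bit outer index (p1), 10-bit inner index (p2), 12-bit offset (d).

def split_two_level(addr):
    d = addr & 0xFFF            # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: inner page table index
    p1 = addr >> 22             # high 10 bits: outer page table index
    return p1, p2, d

print(split_two_level(0x00403007))   # (1, 3, 7)
```

Translation then follows two pointers: the outer table entry p1 locates an inner page table, whose entry p2 gives the frame that d indexes into.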

Hashed Page Tables:


 The virtual page number is hashed into a hash table. Each entry of the table
contains a chain of elements that hash to the same location.
 Each element contains (1) the virtual page number, (2) the value of the mapped
page frame, and (3) a pointer to the next element in the chain.
 Virtual page numbers are compared along this chain, searching for a match.
 If a match is found, the corresponding physical frame is extracted.
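The chained lookup can be sketched as follows, using Python lists for the chains (bucket count and page numbers are hypothetical):

```python
# A hashed page table: each bucket holds a chain of
# (virtual page number, frame number) pairs; lookup walks the chain.

NUM_BUCKETS = 16   # hypothetical hash table size

table = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    table[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in table[hash(vpn) % NUM_BUCKETS]:
        if entry_vpn == vpn:     # compare virtual page numbers in chain
            return frame
    return None                  # no mapping: fault

insert(0x12345, 42)
print(lookup(0x12345))   # 42
print(lookup(0x99999))   # None
```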
INVERTED PAGE TABLE:-
1) Normally, each process has a page table associated with it, with one entry
for each page the process is using.
2) This table representation is natural, since processes reference pages through
the pages' logical addresses.
Drawbacks: -
In this technique each page table may consist of millions of entries, and these
tables consume a large amount of physical memory.
 This problem is solved by using an inverted page table, which has one entry
for each frame of physical memory.
 Each entry consists of the virtual address of the page stored in that real
memory location, with information about the process that owns the page.
 Thus there is only one page table in the system, and it has only one entry for
each frame of physical memory.

 Each logical address in the system consists of a triple:

<process-id, page no, offset>
 Each inverted page table entry is a pair <process-id, page no>.
 When a memory reference occurs, the part of the virtual address consisting of
<process-id, page no> is presented to the memory subsystem.
 The inverted page table is then searched for a match.
 If a match is found, say at entry i, then the physical address <i, offset> is
generated.
 If no match is found, an illegal address access has been attempted.
 This scheme decreases the amount of memory needed to store each page table,
but searching the table takes far too long; hash tables are used to limit the
search.
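The linear search described above can be sketched as follows; the table contents and process ids are hypothetical, and the matching entry's index serves directly as the frame number:

```python
# An inverted page table: one entry per physical frame, each a
# (process id, page number) pair. The index of the matching entry
# is the frame number.

inverted_table = [
    ("P1", 0),   # frame 0 holds page 0 of process P1
    ("P2", 3),   # frame 1 holds page 3 of process P2
    ("P1", 2),   # frame 2 holds page 2 of process P1
]

def translate(pid, page, offset, page_size=4096):
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):            # linear search for a match
            return frame * page_size + offset
    raise MemoryError("illegal address access")

print(translate("P1", 2, 100))   # frame 2 -> 2*4096 + 100 = 8292
```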

Shared pages: -
Another advantage of paging is the possibility of sharing common code. Reentrant
(read-only) code, such as an editor or a compiler, can be shared by having the
page tables of several processes map to the same physical frames, while each
process keeps private pages for its own data.
