Unit -4

Chapter-1
Memory Management
• Main Memory refers to the physical memory that is internal to the computer.
• The word main is used to distinguish it from external mass storage devices
such as disk drives.
• Main memory is also known as RAM. The computer is able to change only
data that is in main memory. Therefore, every program we execute and
every file we access must be copied from a storage device into main
memory.
• All programs are loaded into main memory for execution. Sometimes the complete program is loaded into memory, but at other times a certain part or routine of the program is loaded into main memory only when it is called by the program. This mechanism is called Dynamic Loading, and it enhances performance.
• Also, at times one program depends on some other program. In such a case, rather than loading all the dependent programs up front, the operating system links the dependent programs to the main executing program only when they are required. This mechanism is known as Dynamic Linking.
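As an illustration of dynamic loading and linking at run time, the following minimal C sketch uses the POSIX dlopen/dlsym calls; the library name libmath_ext.so and the function fast_sqrt are hypothetical and only for the example.

/* Minimal sketch of dynamic loading on a POSIX system (compile with -ldl).
   The library "libmath_ext.so" and the function "fast_sqrt" are hypothetical. */
#include <stdio.h>
#include <dlfcn.h>

int main(void) {
    /* Load the library only when this routine is actually needed. */
    void *handle = dlopen("libmath_ext.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "load failed: %s\n", dlerror());
        return 1;
    }
    /* Resolve (link) the symbol at run time. */
    double (*fast_sqrt)(double) = (double (*)(double))dlsym(handle, "fast_sqrt");
    if (fast_sqrt)
        printf("%f\n", fast_sqrt(2.0));
    dlclose(handle);   /* Unload when no longer required. */
    return 0;
}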
Logical and Physical address space
Logical Address:
• A logical address is generated by the CPU while a program is running.
• The logical address is virtual, as it does not exist physically; it is therefore also known as a Virtual Address.
• This address is used by the CPU as a reference to access a physical memory location.
• The logical addresses of a program are visible to the user.
• The term Logical Address Space refers to the set of all logical addresses generated by a program.
• A hardware device called the Memory-Management Unit (MMU) maps each logical address to its corresponding physical address.
Physical Address
• Physical Address identifies a physical location of required data in a
memory. Physical address is like a location that is present in the main
memory.
• The user never directly deals with the physical address, but can access it through the corresponding logical address.
• The user program generates logical addresses and thinks it is running in this logical address space, but the program needs physical memory for its execution. Therefore, each logical address must be mapped to a physical address by the MMU before it is used.
• The term Physical Address Space refers to the set of all physical addresses corresponding to the logical addresses in a logical address space.

Dynamic relocation with relocation register


Logical versus Physical Address Space
• The address generated by the CPU is referred to as the logical address, also called a virtual address.
• The address seen by the memory unit, i.e., the address loaded into the memory-address register, is called the physical address.
• The set of all logical addresses generated by a process is the logical address space, and the set of all physical addresses corresponding to those logical addresses is the physical address space.
• The Memory-Management Unit maps logical addresses dynamically by adding the value in the relocation register. This mapped address is sent to memory.
• The base register is called the relocation register. For every address generated by a user process, the value in the relocation register is added and the result is sent to memory. The base register stores the starting address of the process.
• Physical address = logical address + content of relocation register.
Fig: Dynamic relocation using a relocation register

1. The CPU generates a logical address, e.g., 346.
2. The MMU holds the relocation register (base register), e.g., with value 14000.
3. In memory, the physical address is located at, e.g., 346 + 14000 = 14346.
The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory. The user program never sees the real physical addresses. The program can create a pointer to location 346, store it in memory, manipulate it, and compare it with other addresses, all as the number 346.
The user program generates only logical addresses. However, these logical
addresses must be mapped to physical addresses before they are used.
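A minimal sketch of this relocation step, reusing the 346 and 14000 values from the example above:

#include <stdio.h>

/* Relocation: physical address = logical address + relocation register. */
int main(void) {
    unsigned int relocation_register = 14000;  /* base of the process in memory */
    unsigned int logical_address     = 346;    /* generated by the CPU          */
    unsigned int physical_address    = logical_address + relocation_register;
    printf("physical = %u\n", physical_address);  /* prints 14346 */
    return 0;
}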

Swapping
• Swapping is a memory management technique used to temporarily remove inactive programs from the main memory of the computer system.
• Any process must be in the memory for its execution, but can be swapped
temporarily out of memory to a backing store and then again brought back
into the memory to complete its execution.
• Swapping is done so that other processes get memory for their execution.
Swap In and Swap Out in OS
• The procedure by which a process is moved from the hard disk and placed into main memory (RAM) is commonly known as Swap In.
• On the other hand, Swap Out is the method of removing a process from main memory (RAM) and placing it on the hard disk.

Contiguous Allocation
• Contiguous memory allocation is a memory allocation method that
allocates a single contiguous section of memory to a process or a file.

• If the blocks are allocated to a file in such a way that all the logical blocks of the file get contiguous physical blocks on the disk, then such an allocation scheme is known as contiguous allocation.

In the image shown below, there are three files in the directory. The
starting block and the length of each file are mentioned in the table. We
can check in the table that the contiguous blocks are assigned to each file
as per its need.
Memory allocation is of 2 types:
1. Fixed partition
2. Variable-sized partition (dynamic)
1)Fixed partition:
• This technique is also known as static partitioning.
• The memory is first divided into a certain number of partitions, and the size of each partition is fixed and cannot be changed.
• Fixed partitioning, as the name suggests, involves dividing the physical
memory of a computer system into fixed-sized partitions or segments.
Each partition is reserved for a specific process or task, and once allocated,
it remains dedicated to that process until it completes or is terminated. This
technique simplifies memory allocation, as it allows for easy tracking and
retrieval of data related to each process.
• On the other hand, fixed partitioning has certain limitations. One of
the main drawbacks is that it may result in internal fragmentation.
This occurs when the allocated partition is larger than the actual size
needed by a process, leaving unused memory within the partition.
Consequently, this unused memory cannot be allocated to another process,
leading to inefficient utilization of system resources.
2) Variable sized partition:
• Unlike fixed partitioning, dynamic partitioning offers a more flexible
approach to memory allocation. In this technique, the available memory is
divided into variable-sized partitions, also known as “holes.” When a
process needs memory, it is allocated a hole that is large enough to
accommodate its size. This allows for efficient utilization of memory
resources, as partitions are allocated on-demand, reducing the chances of
internal fragmentation.
Memory protection Hardware
Memory protection: The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug or malware within a process from affecting other processes, or the operating system itself.
• The operating system can prevent a process from accessing another process's memory. We use two registers for this purpose:
1. Base register (relocation register)
2. Limit register
Base Register - contains the starting physical address of the process.
Limit Register - specifies the size of the region occupied by the process, relative to the base address (it determines the ending address of the process).
• The logical address generated by the CPU is first checked against the limit register. If the value of the logical address is less than the value in the limit register, the base address stored in the relocation register is added to the logical address to obtain the physical address of the memory location.
• Memory Management Unit maps the logical address dynamically by
adding the value in the relocation register. This mapped address is sent to
memory.
• If the logical address value is greater than or equal to the limit register, the CPU traps to the OS, and the OS terminates the program with a fatal error.


• When the CPU scheduler selects a process for execution, the dispatcher loads the limit register and relocation register with the correct values for that process, indicating its limit and base address.
• Because every address generated by the CPU is checked against these registers, we can protect both the operating system and other users' programs and data from being modified by this running process.
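A small sketch of the check described above; the function name and the exact behaviour on a violation are illustrative, not a real MMU interface:

#include <stdio.h>
#include <stdlib.h>

/* Sketch of the limit/relocation check performed on every CPU-generated address. */
unsigned int translate(unsigned int logical, unsigned int limit, unsigned int base) {
    if (logical >= limit) {             /* outside the process's region            */
        fprintf(stderr, "trap: addressing error\n");
        exit(EXIT_FAILURE);             /* the OS would terminate with a fatal error */
    }
    return logical + base;              /* relocate into physical memory           */
}

int main(void) {
    printf("%u\n", translate(346, 1000, 14000));   /* valid: prints 14346 */
    return 0;
}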

Memory allocation:
Memory allocation is the process of reserving virtual or physical memory space for a specific purpose (e.g., for computer programs and services to run). Memory allocation is part of the management of computer memory resources, known as memory management.
One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is bound by the number of partitions.
• Multiprogramming was introduced to solve the problem of wasted memory and CPU idle time.
• When a process terminates its execution successfully, the corresponding partition becomes free for loading another process by overwriting.
• As soon as new processes enter the system, they are placed in an input queue.

Strategies for memory allocation


1. First fit:
In first fit, the first available free hole that can fulfill the requirement of the process is allocated. Here, in this diagram, a 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks did not have sufficient memory space.
2. Best fit:
In best fit, we allocate the smallest hole that is big enough for the process's requirement. For this, we search the entire list, unless the list is ordered by size. Here, in this example, we first traverse the complete list and find that the last hole of 25 KB is the most suitable hole for process A (size 25 KB). In this method, memory utilization is maximum compared to the other memory allocation techniques.

3. Worst fit:
In worst fit, we allocate the largest available hole to the process. This method produces the largest leftover hole. Here, in this example, process A (size 25 KB) is allocated to the largest available memory block, which is 60 KB. Inefficient memory utilization is a major issue with worst fit.
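The three placement strategies can be sketched as searches over a list of free holes. The hole sizes below are illustrative and chosen so that a 25 KB request behaves as in the examples above (first fit picks the 40 KB hole, best fit the 25 KB hole, worst fit the 60 KB hole):

#include <stdio.h>

/* Each function returns the index of the chosen hole, or -1 if none fits. */
int first_fit(int holes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) return i;       /* first hole that is big enough  */
    return -1;
}

int best_fit(int holes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)                   /* search the entire list         */
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;                             /* smallest hole that fits        */
    return best;
}

int worst_fit(int holes[], int n, int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                            /* largest hole, largest leftover */
    return worst;
}

int main(void) {
    int holes[] = {10, 20, 40, 60, 25};           /* free holes in KB (illustrative) */
    int req = 25;                                 /* process A needs 25 KB           */
    printf("first=%d best=%d worst=%d\n",
           first_fit(holes, 5, req), best_fit(holes, 5, req), worst_fit(holes, 5, req));
    return 0;
}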

Fragmentation:
• Fragmentation is an unwanted problem in an operating system: as processes are loaded into and unloaded from memory, the free memory space becomes fragmented. Processes cannot be assigned to memory blocks because of the blocks' small size, and the memory blocks stay unused.
Fragmentation arises when a process is loaded and then removed from memory after execution, leaving behind small free holes. These holes cannot be assigned to new processes because they are not combined or do not fulfil the memory requirement of a process.
There are two problems with memory allocation
1. Internal Fragmentation
2. External Fragmentation

1. Internal fragmentation:
• It happens when memory is split into fixed-sized blocks.
• Whenever a process requests memory, a fixed-sized block is allotted to it.
• In case the memory allotted to the process is somewhat larger than the memory requested, the difference between the allotted and requested memory is the internal fragmentation.
• Consider a multiple-partition allocation scheme with a hole of 18,464 bytes. Suppose that the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes.
• The overhead to keep track of this hole will be substantially larger than the
hole itself.
• The general approach to avoiding this problem is to break the physical
memory into fixed sized blocks and allocate memory in units based on
block size.
• With this approach, the memory allocated to a process may be slightly
larger than the requested memory.
• This difference between these two numbers is Internal Fragmentation.
It is unused memory.
The diagram above illustrates internal fragmentation: the difference between the memory allocated and the memory actually required is the internal fragmentation.
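As a quick worked sketch, with fixed-sized allocation units the internal fragmentation is simply the allocated size minus the requested size; the 4096-byte block size below is an assumed value:

#include <stdio.h>

int main(void) {
    unsigned int block_size = 4096;                       /* fixed allocation unit (assumed) */
    unsigned int request    = 18462;                      /* bytes actually needed           */
    unsigned int blocks     = (request + block_size - 1) / block_size;  /* round up          */
    unsigned int allocated  = blocks * block_size;        /* 5 blocks = 20480 bytes          */
    printf("internal fragmentation = %u bytes\n", allocated - request); /* 2018 bytes        */
    return 0;
}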
2. External Fragmentation
• The major problem with multiple-partition allocation algorithms is the development of external fragments.
• Fragmentation is the development of unusable holes or fragments.
• It happens when there is a sufficient amount of space in memory to satisfy the memory request of a process, but the request cannot be fulfilled because the available memory is not contiguous.
• External fragmentation occurs because of contiguous memory allocation.
• When the total space required is available but not in a contiguous fashion, this is external fragmentation.

You can see in the figure mentioned above that there is sufficient memory space (55 KB) to execute process-07 (50 KB required), but the free space (fragments) is not contiguous.
Solution to external fragmentation:
External fragmentation can be reduced by:
1. Compaction
2. Paging
3. Segmentation
Paging
• Paging is a method of writing and reading data from a secondary
storage(Drive) for use in primary storage(RAM).
• It permits the logical address space of a process to be non contiguous. The
logical memory is broken down into fixed size partitions(blocks) called
pages.
• The physical memory is divided into fixed-sized blocks called frames.
• The size of a page is the same as that of a frame.
• A page number forms a part of logical address and a frame number forms
a part of a physical address.
• When a process is to be executed, its pages are loaded into available
memory frames from their source such as a file system or backing store.
• The backing store is divided into fixed sized blocks that are the same size
as the memory frames or clusters of multiple frames.
• The frame table maintains the list of frames and the allocation details of each frame (i.e., whether a frame is free or allocated to some page).
• Each process has its own page table. When a process's pages are loaded into main memory, the corresponding page table is active in the system and all other page tables are inactive. The page table and frame table are kept in main memory.
• A logical address consists of two parts: a page number and a page offset.
• The page number is used as an index into the page table. The page table contains the base address (starting address) of each page in physical memory.
• This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
• A physical address consists of two parts: a frame number and a frame offset.
• Frame offset (d): the number of bits necessary to represent a particular word within a frame, i.e., the word number (displacement) within the frame.
• Frame number (f): the number of bits needed to identify a frame of the physical address space.
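A minimal sketch of this translation: the page number indexes the page table and the resulting frame number is combined with the unchanged offset. The 1 KB page size and the page-table contents below are assumptions for illustration:

#include <stdio.h>

#define PAGE_SIZE   1024u                 /* assumed page size: 1 KB */
#define OFFSET_BITS 10                    /* log2(PAGE_SIZE)         */

int main(void) {
    unsigned int page_table[] = {5, 2, 7, 0};         /* page -> frame (illustrative) */
    unsigned int logical = 2 * PAGE_SIZE + 123;       /* page 2, offset 123           */

    unsigned int page   = logical >> OFFSET_BITS;     /* page number                  */
    unsigned int offset = logical & (PAGE_SIZE - 1);  /* page offset                  */
    unsigned int frame  = page_table[page];           /* look up the frame number     */
    unsigned int physical = (frame << OFFSET_BITS) | offset;

    printf("page=%u offset=%u -> physical=%u\n", page, offset, physical);
    return 0;
}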
Fig: Paging Hardware

Steps for handling a page fault:


• A page fault happens when a process accesses a page that has been marked invalid. The paging hardware notices that the invalid bit is set while translating the address through the page table, which causes a trap to the operating system. The trap occurs because the OS has not yet loaded the needed page into memory.
• A page fault is handled much like an error. A page fault happens if a program tries to access a piece of memory that does not exist in physical memory (main memory). The fault requires the operating system to locate the data through its virtual memory management and then bring it from secondary storage, such as a hard disk, into primary memory.
Steps:
1. We check an internal page table, kept with the process control block for this process, to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but we have not yet brought in that page, we now page it in.
3. We find a free frame (for example, by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and also the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the page in main memory.
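The steps above can be summarised as a toy, self-contained simulation; all data structures and sizes here are made up for illustration, and page replacement is omitted:

#include <stdio.h>
#include <string.h>

#define PAGES  8
#define FRAMES 4

/* Toy simulation of the page-fault steps above; all structures are illustrative. */
struct pte { int valid; int frame; };

struct pte page_table[PAGES];
int free_frames[FRAMES] = {0, 1, 2, 3};
int free_count = FRAMES;

void access(int page) {
    if (page < 0 || page >= PAGES) {                 /* steps 1-2: invalid reference  */
        printf("invalid reference: terminate process\n");
        return;
    }
    if (!page_table[page].valid) {                   /* page fault: not yet in memory */
        int frame = free_frames[--free_count];       /* step 3: take a free frame     */
        printf("page fault: reading page %d into frame %d\n", page, frame); /* step 4 */
        page_table[page].valid = 1;                  /* step 5: update the page table */
        page_table[page].frame = frame;
    }
    printf("access page %d -> frame %d\n", page, page_table[page].frame);   /* step 6 */
}

int main(void) {
    memset(page_table, 0, sizeof page_table);
    access(2);       /* first access faults, second one hits */
    access(2);
    return 0;
}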

Paging Hardware with TLB


• In Operating System for each process, page table will be created, which
will contain a Page Table Entry (PTE).
• This PTE will contain information like frame number (The address of the
main memory where we want to refer), and some other useful bits (e.g.,
valid/invalid bit, dirty bit, protection bit, etc).
• This page table entry (PTE) will tell where in the main memory the actual
page is residing.
• The idea used here is to place the page table entries in registers; for each request generated by the CPU (virtual address), the page number is matched against the page table, which then tells where in main memory the corresponding page resides.
• But the problem is that register capacity is small (in practice, registers can accommodate at most about 0.5K to 1K page table entries) while the process may be large, so the required page table will also be large (say, 1M entries), and registers may not hold all the PTEs of the page table. So this is not a practical approach.
To overcome this size issue, the entire page table is kept in main memory. But the problem here is that two main-memory references are then required:
1. To find the frame number
2. To go to the address specified by that frame number
To overcome this problem, a high-speed cache for page table entries, called a Translation Lookaside Buffer (TLB), is set up. If the page table contains a large number of entries, we can use the TLB, a special, small, fast look-up hardware cache. The TLB is a special cache used to keep track of recently used translations; it contains the page table entries that have been used most recently. Given a virtual address, the processor examines the TLB: if the page table entry is present (a TLB hit), the frame number is retrieved and the real address is formed. If the page table entry is not found in the TLB (a TLB miss), the page number is used as an index into the page table in main memory, and the TLB is then updated to include the new entry (a page fault is raised only if the page itself is not in main memory). The TLB reduces the effective memory access time because it is a high-speed associative cache.

Steps in TLB hit


1. CPU generates a virtual (logical) address.
2. It is checked in TLB (present).
3. The corresponding frame number is retrieved, which now tells where the
main memory page lies.
Steps in TLB miss
1. CPU generates a virtual (logical) address.
2. It is checked in TLB (not present).
3. Now the page number is matched to the page table residing in the main
memory (assuming the page table contains all PTE).
4. The corresponding frame number is retrieved, which now tells where the
main memory page lies.
5. The TLB is updated with the new PTE (page table entry); if space is not available, one of the replacement techniques (e.g., FIFO or LRU) comes into the picture.
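One common way to quantify the benefit of the TLB is the effective memory access time (EAT). The hit ratio and access times below are assumed values, not figures from the text:

#include <stdio.h>

int main(void) {
    double tlb_time  = 10.0;    /* ns, TLB lookup (assumed)             */
    double mem_time  = 100.0;   /* ns, one main-memory access (assumed) */
    double hit_ratio = 0.9;     /* fraction of references found in TLB  */

    /* TLB hit: one memory access; TLB miss: page-table access + data access. */
    double eat = hit_ratio * (tlb_time + mem_time)
               + (1.0 - hit_ratio) * (tlb_time + 2.0 * mem_time);
    printf("effective access time = %.1f ns\n", eat);   /* 120.0 ns with these values */
    return 0;
}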

Structure of the page table


The page table can be structured in 3 ways:
1. Hierarchical paging
2. Hashed page tables
3. Inverted page tables
1. Hierarchical paging:
• It is a paging scheme that consists of two or more levels of page tables arranged in a hierarchical manner. It is also known as multilevel paging. When the CPU accesses a page of any process, the page must be in main memory. Along with the page, the page table of the same process must also be stored in main memory. There might be a case where the page table is too big to fit in a contiguous space, so we may use a hierarchy with several levels.

• In this type of paging, the logical address space is broken up into Multiple
page tables. Hierarchical paging is one of the simplest techniques, and a
two level page table and three level page table can be used for this purpose.
• The actual frame information is stored in the entries of the final level page
table. Level 1 has a single page table whose address is contained in PTBR
(Page Table Base Register).
Two-Level Page Table: Two-level paging is a scheme in which the page table itself is paged. So we have two page tables: an inner page table and an outer page table.
Consider a system having a 32-bit logical address space and a page size of 4 KB. The logical address is divided into a page number consisting of 20 bits and a page offset consisting of 12 bits.
As we page the page table, the page number is further divided into a 10-bit outer page number (p1) and a 10-bit inner page number (p2), with the 12-bit page offset unchanged.
Thus the logical address is as follows:

• In the above image, p1 is an index into the outer page table, and p2 indicates the displacement within the page of the inner page table.
• Because address translation works from the outer page table inward, this scheme is also known as a forward-mapped page table.
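Splitting the 32-bit logical address of this example (10-bit p1, 10-bit p2, 12-bit offset) can be sketched with shifts and masks; the sample address is arbitrary:

#include <stdio.h>

int main(void) {
    unsigned int logical = 0x12345678;               /* a sample 32-bit logical address */

    unsigned int p1     = (logical >> 22) & 0x3FF;   /* top 10 bits: outer page table   */
    unsigned int p2     = (logical >> 12) & 0x3FF;   /* next 10 bits: inner page table  */
    unsigned int offset =  logical        & 0xFFF;   /* low 12 bits: page offset        */

    printf("p1=%u p2=%u offset=%u\n", p1, p2, offset);
    return 0;
}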

Three-Level Page Table: A two-level paging scheme is not appropriate for a system with a 64-bit logical address space. Suppose that the page size is 4 KB. If we use the two-level scheme, then the addresses will look as follows:

Thus, to avoid such a large outer page table, the solution is to divide the outer page table further, which results in a three-level page table, as shown below.
• In multilevel paging whatever may be levels of paging, all the page tables
will be stored in the main memory. So it requires more than one memory
access to get the physical address of the page frame. One access for each
level is needed. Each page table entry except the last level page table entry
contains the base address of the next level page table.

Working
• The page table that is larger than the frame size is divided into several parts.
• Except for the last component, the size of each part is the same as the size
of the frame.
• The page table pages are then stored in various frames of main memory.
• Another page table is maintained to keep track of the frames that store the
pages of the divided page table.
• As a consequence, the page table hierarchy is created.
• Multilevel paging is carried out until the level is reached, where the
complete page table can be stored in a single frame.
• In multilevel paging, regardless of the levels of paging, all page tables are
kept in the main memory. As a result, obtaining the physical address of a
page frame necessitates more than one memory access. Each level requires
one access. Except for the last level page table item, each page table entry
provides the base address of the subsequent level page table.
2. Hashed Page Table:
• A hashed page table is used for handling address spaces larger than 32 bits.
• Each entry in the hash table contains a linked list of elements that hash to the same location, in order to handle collisions.
• In Hashed Page Tables, the virtual page number in the virtual address is
hashed into the hash table.
• Hashed page tables can reduce the size of the page table by only storing
entries for pages that are currently in use. This contrasts traditional page
tables, which must store an entry for every possible virtual page number.
• Each element consists of three fields:
1. Virtual/logical page number
2. Value of the mapped page frame
3. A pointer to the next element in the linked list.
• The virtual page number is compared with field 1 in the first element in the
linked list.
• If there is a match , the corresponding page frame (field 2) is used to form
the desired physical address.
• If there is no match, subsequent entries in the linked list are searched for a
matching virtual page number.
Working of Hashed Page Table
The CPU generates a logical address for the page it needs. Now, this logical
address needs to be mapped to the physical address. This logical address has two
entries, i.e., a page number (P3) and an offset, as shown below.

• The page number from the logical address is directed to the hash function.
• The hash function produces a hash value corresponding to the page
number.
• This hash value directs to an entry in the hash table.
• As we studied earlier, each entry in the hash table has a linked list. Here the page number is compared with the first field of the first element; if a match is found, the second field (the frame) is used.
• In this example, the logical address includes page number P3, which does not match the first element of the linked list, as that element contains page number P1. So we move ahead and check the next element; this element has the page number entry P3, so we take the frame entry of this element, which is fr5. We append the offset provided in the logical address to this frame number to obtain the page's physical address. This is how the hashed page table maps a logical address to a physical address.
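A minimal sketch of the lookup described above: the virtual page number is hashed and the chain at that bucket is searched. The hash function, the bucket count, and the P3 -> fr5 mapping are assumptions for illustration:

#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 16

/* Each chain element holds the three fields described above. */
struct entry {
    unsigned long vpn;         /* virtual page number          */
    unsigned long frame;       /* mapped page frame            */
    struct entry *next;        /* pointer to the next element  */
};

struct entry *table[BUCKETS];

unsigned int hash_vpn(unsigned long vpn) { return vpn % BUCKETS; }   /* assumed hash */

void insert(unsigned long vpn, unsigned long frame) {
    struct entry *e = malloc(sizeof *e);
    e->vpn = vpn; e->frame = frame;
    e->next = table[hash_vpn(vpn)];
    table[hash_vpn(vpn)] = e;
}

/* Returns the frame for vpn, or -1 if the page is not resident. */
long lookup(unsigned long vpn) {
    for (struct entry *e = table[hash_vpn(vpn)]; e; e = e->next)
        if (e->vpn == vpn)                 /* compare with field 1 of each element */
            return (long)e->frame;
    return -1;
}

int main(void) {
    insert(3, 5);                          /* page P3 -> frame fr5, as in the example */
    printf("frame for page 3 = %ld\n", lookup(3));
    return 0;
}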
3. Inverted Page Table:
• Most operating systems implement a separate page table for each process, i.e., for 'n' processes running on a multiprocessing/time-sharing operating system, there are 'n' page tables stored in memory. When a process is very large and occupies a lot of virtual memory, its page table size also grows substantially.
• The Inverted Page Table is a global page table maintained by the operating system for all processes. The inverted page table structure consists of one page-table entry for every main-memory frame.
• In an inverted page table, the number of entries is equal to the number of frames in main memory. It can be used to overcome the drawbacks of per-process page tables.
• In an ordinary page table, space is reserved for every page regardless of whether it is present in main memory or not; this is simply a waste of memory if the page is not present.
• The overhead of holding a separate page table for each process is eliminated with the inverted page table: only a fixed area of memory is required to store the paging information of all processes together.
• This technique is called inverted paging because the indexing is done with respect to the frame number instead of the logical page number.

We can avoid this wastage by inverting the page table: we keep details only for the pages that are present in main memory. The frames are the indices, and the information saved in each entry is the process ID and page number.
Components of Inverted Page Table:
1. Page number: It specifies the page number of the logical address.
2. Process id: An inverted page table contains the address-space information of all the processes in execution. Since two different processes can have a similar set of virtual addresses, it becomes necessary in the inverted page table to store the process id of each process to identify its address space uniquely. This is done by using the combination of PID and page number. The process id thus acts as an address-space identifier and ensures that a virtual page of a particular process is mapped correctly to the corresponding physical frame.
3. Page offset: the displacement of the required word within the page.
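A hedged sketch of the lookup: the table holds one entry per physical frame, and a translation searches for the (process id, page number) pair; the index of the matching entry is the frame number. All sizes and values are illustrative:

#include <stdio.h>

#define FRAMES 8

/* One entry per physical frame: which process/page currently occupies it. */
struct ipt_entry { int pid; int page; int used; };

struct ipt_entry ipt[FRAMES];

/* Search the whole table for (pid, page); the matching index is the frame number. */
int translate(int pid, int page) {
    for (int f = 0; f < FRAMES; f++)
        if (ipt[f].used && ipt[f].pid == pid && ipt[f].page == page)
            return f;
    return -1;                       /* not resident: would raise a page fault */
}

int main(void) {
    ipt[3] = (struct ipt_entry){ .pid = 7, .page = 2, .used = 1 };  /* illustrative */
    printf("frame = %d\n", translate(7, 2));                        /* prints 3     */
    return 0;
}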

Segmentation
• Segmentation is a memory management technique in which memory is divided into variable-sized parts. Each part is known as a segment, which can be allocated to a process. The details about each segment are stored in a table called the segment table. The segment table itself is stored in one (or more) of the segments.
• The segment table contains mainly two pieces of information about each segment:
1. Base: the base address of the segment.
2. Limit: the length of the segment.
• The address generated by the CPU is divided into:
1. Segment number (s): used as an index into the segment table.
2. Segment offset (d): the displacement within the segment; it must lie within the segment's size.
• When an offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte.
• If d >= segment limit, the offset is illegal and an addressing-error trap is generated, indicating an attempt to address beyond the end of the segment.
• The segment table is essentially an array of base-limit register pairs.

Fig: Segmentation hardware
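A minimal sketch of this segmentation hardware: the segment number indexes an array of base-limit pairs, the offset is checked against the limit, and a legal offset is added to the base. The segment-table contents below are illustrative:

#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned int base; unsigned int limit; };

/* Illustrative segment table: an array of base-limit register pairs. */
struct segment seg_table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };

unsigned int translate(unsigned int s, unsigned int d) {
    if (d >= seg_table[s].limit) {                  /* offset beyond end of segment */
        fprintf(stderr, "trap: addressing error\n");
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + d;                   /* legal: add offset to base    */
}

int main(void) {
    printf("physical = %u\n", translate(2, 53));    /* segment 2, offset 53 -> 4353 */
    return 0;
}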


Advantages of Segmentation

1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.

Disadvantages of Segmentation

1. It can have external fragmentation.


2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. Costly memory management algorithms.
