
Unit 4: Memory Management, Virtual Memory, File Concepts
Smt B.Soujanya
Assistant Professor
Department of Computer Science and Technology
GITAM Institute of Technology (GIT)
Visakhapatnam – 530045
Email: [email protected]
UNIT-4

Memory Management: Swapping, contiguous memory allocation, paging, segmentation, structure of the page table.
Virtual Memory: Demand paging, copy-on-write, page replacement, allocation of frames, thrashing.
File Concepts: File concept, access methods, directory and disk structure, protection.
Memory Management
● Memory management is the process of controlling and coordinating computer memory, assigning portions known as blocks to various running programs to optimize the overall performance of the system.

● It is one of the most important functions of an operating system: managing primary memory.

● It allows processes to move back and forth between main memory and the disk during execution.

● It helps the OS keep track of every memory location, irrespective of whether it is allocated to some process or remains free.


Memory Management

● A program must be brought (from disk) into memory and placed within a process for it to be run

● Main memory and registers are the only storage the CPU can access directly

● The memory unit sees only a stream of addresses: an address with a read request, or an address plus data with a write request

● Register access takes one CPU clock cycle (or less)

● Main memory access can take many cycles, causing a stall

● Cache sits between main memory and CPU registers

● Protection of memory is required to ensure correct operation


Logical vs. Physical Address Space

● The concept of a logical address space that is bound to a separate physical address space is central to proper memory management
○ Logical address – generated by the CPU; also referred to as a virtual address
○ Physical address – address seen by the memory unit

Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

Logical address space is the set of all logical addresses generated by a program.

Physical address space is the set of all physical addresses corresponding to these logical addresses.
Base and Limit Registers
● A pair of base and limit registers define the logical address space
● The CPU must check every memory access generated in user mode to be sure it falls between the base and base + limit for that user
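As an illustration of this check, here is a minimal Python sketch (not from the slides; the base and limit values are assumed for the example):

```python
# Minimal sketch of the base/limit hardware check (illustrative, not an OS API).
def check_access(address: int, base: int, limit: int) -> int:
    """Return the address if it is legal for this user process, else trap."""
    if base <= address < base + limit:
        return address          # access proceeds to memory
    raise MemoryError("trap to operating system: addressing error")

# Assumed example: base = 300040, limit = 120900 -> legal range [300040, 420940)
print(check_access(300041, base=300040, limit=120900))   # OK
# check_access(420940, base=300040, limit=120900)        # would trap
```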
Hardware Address Protection
Swapping
● Swapping is a method in which a process is swapped temporarily from main memory to a backing store.

● The process is later brought back into memory for continued execution.

● The backing store is a hard disk or some other secondary storage device that must be big enough to accommodate copies of all memory images for all users.

● It must also be capable of offering direct access to these memory images.


Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought
back into memory for continued execution

■ Backing store – fast disk large enough to accommodate copies of all memory images for all
users; must provide direct access to these memory images

■ Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and executed

■ Major part of swap time is transfer time; total transfer time is directly proportional to the
amount of memory swapped

■ Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)
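A rough worked example of the transfer-time claim (the process size and disk transfer rate below are assumed figures, not from the slides):

```python
# Illustrative back-of-the-envelope swap-time estimate (assumed figures).
process_size_mb = 100          # size of the memory image to swap
transfer_rate_mb_s = 50        # sustained disk transfer rate

one_way = process_size_mb / transfer_rate_mb_s       # swap out OR swap in
round_trip = 2 * one_way                             # swap out + swap in
print(f"one-way transfer: {one_way:.1f} s, swap out + in: {round_trip:.1f} s")
# Doubling the amount of memory swapped doubles the transfer time.
```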
Swapping
Benefits of Swapping
Here are the major benefits of swapping:

● It offers a higher degree of multiprogramming.
● It allows dynamic relocation. For example, if address binding is done at execution time, a process can be swapped back into a different memory location; with compile-time or load-time binding, the process must be moved back to the same location.
● It helps to get better utilization of memory.
● It wastes little CPU time, and it can easily be applied to a priority-based scheduling method to improve performance.
Contiguous Memory Allocation
The main memory must accommodate both the operating system and the various user
processes. We therefore need to allocate the parts of the main memory in the most efficient
way possible. This section explains one common method, contiguous memory allocation.

The memory is usually divided into two partitions:

1. one for the resident operating system and
2. one for the user processes.

We can place the operating system in either low memory or high memory.

In contiguous memory allocation, each process is contained in a single contiguous section of memory.
Contiguous Memory Allocation
Three components

1. Memory Mapping and Protection

2. Memory Allocation

3. Fragmentation
1.Memory Mapping and Protection
● The relocation register contains the value of the smallest physical address;
● the limit register contains the range of logical addresses (for example, relocation = 100040 and limit = 74600).
● With relocation and limit registers, each logical address must be less than the limit register; the MMU maps the logical address dynamically by adding the value in the relocation register.
● This mapped address is sent to memory.
● When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch.
● The relocation-register scheme provides an effective way to allow the operating-system size to change dynamically. This flexibility is desirable in many situations.
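A minimal sketch of this mapping, using the relocation = 100040, limit = 74600 example above (illustrative Python, not an OS interface):

```python
# Sketch of dynamic relocation with relocation and limit registers.
def mmu_map(logical: int, relocation: int, limit: int) -> int:
    """Map a logical address to a physical address, or trap if out of range."""
    if logical >= limit:
        raise MemoryError("trap: addressing error (logical address >= limit)")
    return logical + relocation   # physical address sent to memory

print(mmu_map(0,     relocation=100040, limit=74600))   # -> 100040
print(mmu_map(74599, relocation=100040, limit=74600))   # -> 174639
# mmu_map(74600, relocation=100040, limit=74600)         # would trap
```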
1.Memory Mapping and Protection
Hardware Support for relocation and limit registers.
2. Memory Allocation
One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions.

● Each partition may contain exactly one process.
● Thus, the degree of multiprogramming is bound by the number of partitions.
● In this multiple-partition method, when a partition is free, a process is selected from the input queue and is loaded into the free partition.
● When the process terminates, the partition becomes available for another process.
2. Memory Allocation
This procedure is a particular instance of the general dynamic storage-allocation problem, which concerns how to satisfy a request of size n from a list of free holes. There are many solutions to this problem. The first-fit, best-fit, and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes.

1. First fit. Allocate the first hole that is big enough. Searching can start either at the beginning
of the set of holes or where the previous first-fit search ended. We can stop searching as
soon as we find a free hole that is large enough.
2. Best fit. Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.
3. Worst fit. Allocate the largest hole. Again, we must search the entire list, unless it is sorted
by size. This strategy produces the largest leftover hole, which may be more useful than the
smaller leftover hole from a best-fit approach.
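A minimal sketch of the three strategies over a free-hole list (the hole sizes and the request size are assumed for illustration):

```python
# Sketch of the three placement strategies over a free-hole list.
# Holes are (start, size) pairs; this only picks a hole, it does not split it.
def first_fit(holes, n):
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1], default=None)   # smallest adequate hole

def worst_fit(holes, n):
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1], default=None)   # largest hole

holes = [(0, 100), (300, 500), (1000, 200), (1500, 300)]
print(first_fit(holes, 212))   # (300, 500): first hole big enough
print(best_fit(holes, 212))    # (1500, 300): leaves the smallest leftover hole
print(worst_fit(holes, 212))   # (300, 500): leaves the largest leftover hole
```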
3.Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. After some time, processes cannot be allocated to these memory blocks because the blocks are too small, and the blocks remain unused. This problem is known as fragmentation.

Fragmentation is of two types:

1. Internal fragmentation

The memory block assigned to a process is bigger than the process requires. Some portion of the block is left unused, and it cannot be used by another process.

2. External fragmentation

Total free memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.
3.Fragmentation
Fragmentation causes waste of memory; a compaction technique can be used to create more free memory out of fragmented memory.

External fragmentation can be reduced by compaction: shuffling memory contents to place all free memory together in one large block. To make compaction feasible, relocation must be dynamic.
Internal fragmentation can be reduced by assigning the smallest partition that is still large enough for the process.
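A minimal sketch of compaction under the assumption of dynamic relocation (process names and sizes are illustrative):

```python
# Minimal compaction sketch: slide all allocated blocks toward low memory so
# the free space coalesces into one large hole (assumes dynamic relocation).
def compact(blocks):
    """blocks: list of (name, size) allocations in memory order.
    Returns new (name, start) placements packed from address 0."""
    placements, next_free = [], 0
    for name, size in blocks:
        placements.append((name, next_free))   # relocate block to next_free
        next_free += size
    return placements, next_free               # next_free = start of the single free hole

allocs = [("P1", 100), ("P3", 300), ("P4", 50)]
placed, free_start = compact(allocs)
print(placed)        # [('P1', 0), ('P3', 100), ('P4', 400)]
print(free_start)    # all free memory now begins at 450 as one block
```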
3.Fragmentation
Logical Versus Physical Address Space
The address generated by the CPU is a logical address, whereas the address actually
seen by the memory hardware is a physical address.

Addresses bound at compile time or load time have identical logical and physical
addresses.

Addresses created at execution time, however, have different logical and physical
addresses.

In this case the logical address is also known as a virtual address, and the two
terms are used interchangeably by our text.

The set of all logical addresses used by a program composes the logical
address space, and the set of all corresponding physical addresses composes
the physical address space.

The run-time mapping of logical to physical addresses is handled by the memory-management unit, MMU.

The MMU can take on many forms. One of the simplest is a modification of the base-register scheme described earlier.

The base register is now termed a relocation register, whose value is added to every memory request at the hardware level.
Paging
● Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous.

● Paging avoids the considerable problem of fitting memory chunks of varying sizes onto the backing store; most memory-management schemes used before the introduction of paging suffered from this problem.

● The problem arises because, when some code fragments or data residing in main memory need to be swapped out, space must be found on the backing store.
Paging
Basic Method:-

The basic method for implementing paging involves breaking physical memory into fixed-sized
blocks called frames and breaking logical memory into blocks of the same size called pages.

When a process is to be executed, its pages are loaded into any available memory frames
from the backing store.

The backing store is divided into fixed-sized blocks that are of the same size as the memory
frames.
Every address generated by the CPU is divided into two parts: a page number (p) and
a page offset (d).

The page number is used as an index into a page table. The page table contains the
base address of each page in physical memory.

This base address is combined with the page offset to define the physical memory
address that is sent to the memory unit.
Paging-Hardware
Paging
The page size (like the frame size) is defined by the hardware.

The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page,
depending on the computer architecture.

The selection of a power of 2 as a page size makes the translation of a logical address into a page
number and page offset particularly easy.

If the size of the logical address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order m − n bits of a logical address designate the page number, and the n low-order bits designate the page offset.

Thus, the logical address is as follows:

| page number p (m − n bits) | page offset d (n bits) |

where p is an index into the page table and d is the displacement within the page.
Paging
Address Translation
The page address is called the logical address and is represented by a page number and an offset:

Logical Address = (page number, page offset)

The frame address is called the physical address and is represented by a frame number and an offset:

Physical Address = (frame number, page offset)

A data structure called the page map table is used to keep track of the relation between a page of a process and a frame in physical memory.
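A minimal sketch of this translation, assuming a power-of-2 page size so the split can be done with shifts and masks (the page-table contents below are illustrative):

```python
# Sketch of the paging translation: split the logical address into (p, d)
# with bit operations, then combine the frame number from the page table with d.
def translate(logical: int, page_table, n: int) -> int:
    """n = number of offset bits, so page size = 2**n."""
    p = logical >> n                 # page number = high-order bits
    d = logical & ((1 << n) - 1)     # page offset = low-order n bits
    frame = page_table[p]            # page table gives the frame number
    return (frame << n) | d          # physical address = frame * page_size + offset

# Assumed example: page size 1 KB (n = 10); page 3 maps to frame 7.
print(translate(3 * 1024 + 5, page_table=[2, 9, 4, 7], n=10))   # -> 7173
```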
Paging model of logical and physical memory
Paging-Example

Consider an example where n = 2 and m = 4: a page size of 4 bytes and a physical memory of 32 bytes (8 frames). Assume the page table maps page 0 → frame 5, page 1 → frame 6, page 2 → frame 1, and page 3 → frame 2. We show how logical addresses map to physical addresses:

1) Logical address 0 (page 0, offset 0) maps to physical address 20, i.e., (5 × 4) + 0

2) Logical address 4 (page 1, offset 0) maps to physical address 24, i.e., (6 × 4) + 0

3) Logical address 8 (page 2, offset 0) maps to physical address 4, i.e., (1 × 4) + 0

4) Logical address 12 (page 3, offset 0) maps to physical address 8, i.e., (2 × 4) + 0
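A quick standalone check of these four mappings (the page table below is the one assumed in the example):

```python
# Re-compute the example: page size 4 bytes (n = 2), page table 0->5, 1->6, 2->1, 3->2.
page_table = [5, 6, 1, 2]
for logical in (0, 4, 8, 12):
    p, d = logical // 4, logical % 4
    physical = page_table[p] * 4 + d
    print(f"logical {logical:2d} -> physical {physical}")
# logical 0 -> 20, 4 -> 24, 8 -> 4, 12 -> 8
```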


Paging-Hardware-TLB
With the page table kept in main memory, memory access is effectively twice as slow, because every memory access now requires two memory accesses: one to fetch the frame number from the page table and another to access the desired memory location.

The solution to this problem is to use a very special high-speed memory device called the
translation look-aside buffer, TLB.

The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key
(or tag) and a value. When the associative memory is presented with an item, the item is
compared with all keys simultaneously.

If the item is found, the corresponding value field is returned.

The TLB contains only a few of the page-table entries. When a logical address is generated by
the CPU, its page number is presented to the TLB. If the page number is found, its frame
number is immediately available and is used to access memory.
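A minimal sketch of a TLB lookup sitting in front of the page table (the TLB is modeled as a Python dict, no replacement policy is shown, and the entries are illustrative):

```python
# Sketch of a TLB lookup in front of the page table (real TLBs search all
# entries in parallel in hardware; a dict stands in for that here).
def lookup_frame(page: int, tlb: dict, page_table, stats: dict) -> int:
    if page in tlb:                      # TLB hit: frame number immediately available
        stats["hits"] += 1
        return tlb[page]
    stats["misses"] += 1                 # TLB miss: extra memory access to the page table
    frame = page_table[page]
    tlb[page] = frame                    # cache the entry for next time
    return frame

tlb, stats = {}, {"hits": 0, "misses": 0}
page_table = [5, 6, 1, 2]
for page in (0, 1, 0, 0, 2):
    lookup_frame(page, tlb, page_table, stats)
print(stats)   # {'hits': 2, 'misses': 3}
```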
Paging -Hardware Support
Paging-Protection
● Memory protection in a paged environment is accomplished by protection bits
associated with each frame. Normally, these bits are kept in the page table. One bit can
define a page to be read-write or read-only.
● One additional bit is generally attached to each entry in the page table: a valid-invalid
bit. When this bit is set to "valid," the associated page is in the process's logical
address space and is thus a legal (or valid) page.
● When the bit is set to "invalid," the page is not in the process's logical address space. Illegal addresses are trapped by use of the valid-invalid bit.
● The operating system sets this bit for each page to allow or disallow access to the page.
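A minimal sketch of these protection checks (the field names valid and writable are illustrative, not a real page-table-entry format):

```python
# Sketch of per-page protection: a valid-invalid bit and a read-write/read-only
# bit stored alongside each page-table entry.
from dataclasses import dataclass

@dataclass
class PTE:
    frame: int
    valid: bool = False      # valid-invalid bit
    writable: bool = True    # read-write vs. read-only bit

def check(pte: PTE, is_write: bool) -> int:
    if not pte.valid:
        raise MemoryError("trap: page not in the process's logical address space")
    if is_write and not pte.writable:
        raise MemoryError("trap: write to a read-only page")
    return pte.frame

print(check(PTE(frame=3, valid=True, writable=False), is_write=False))  # read OK -> 3
# check(PTE(frame=3, valid=True, writable=False), is_write=True)         # would trap
```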
Paging -Protection
Structure of Page Table
The structure of the page table simply defines the ways in which a page table can be organized. Paging is a memory-management technique where a large process is divided into pages that are placed in physical memory, which is also divided into frames.

Frame size and page size are equal. The operating system uses a page table to map the logical address of a page generated by the CPU to its physical address in main memory.

A page table can be structured in three ways:

1. Hierarchical Page Table
2. Hashed Page Table
3. Inverted Page Table
1.Hierarchical Paging
Another name for Hierarchical Paging is multilevel paging.
● There might be a case where the page table is too big to fit in a contiguous space, so we may have a hierarchy with several levels.
● In this type of paging, the logical address space is broken up into multiple page tables.
● Hierarchical Paging is one of the simplest techniques; for this purpose, a two-level page table or a three-level page table can be used.
● So we will have two page tables: an inner page table and an outer page table. Suppose we have a 32-bit logical address with a 20-bit page number and a 12-bit page offset.
● As we are paging the page table, the 20-bit page number is further split into a 10-bit page number (an index into the outer page table) and a 10-bit page offset (the displacement within the page of the inner page table).
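A minimal sketch of the 10/10/12 two-level split described above (the toy outer and inner tables are assumed for illustration):

```python
# Sketch of two-level translation for a 32-bit address split 10 / 10 / 12
# (10-bit outer index p1, 10-bit inner index p2, 12-bit offset d).
def split_two_level(logical: int):
    d  = logical & 0xFFF            # low 12 bits: page offset
    p2 = (logical >> 12) & 0x3FF    # next 10 bits: index into inner page table
    p1 = (logical >> 22) & 0x3FF    # high 10 bits: index into outer page table
    return p1, p2, d

def translate_two_level(logical: int, outer_table) -> int:
    p1, p2, d = split_two_level(logical)
    inner_table = outer_table[p1]   # outer entry points to an inner page table
    frame = inner_table[p2]         # inner entry gives the frame number
    return (frame << 12) | d

outer = {0: {1: 7}}                 # toy tables: p1=0 -> inner table, p2=1 -> frame 7
print(translate_two_level((0 << 22) | (1 << 12) | 0x2A, outer))   # -> 7*4096 + 42 = 28714
```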
2.Hashed Page Table
The hashed page table is a convenient way to structure the page table when the logical address space is beyond 32 bits.

The hash table has several entries, where each entry holds a linked list.

Each linked list contains the elements that hash to the same location.

Each element has three fields: the page number, the frame number, and a pointer to the next element.

The page number from the logical address is directed to the hash function. The hash function produces a hash value corresponding to the page number. This hash value directs to an entry in the hash table.
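A minimal sketch of a hashed page table with chained entries (the table size and page numbers are assumed, and Python's built-in hash stands in for the hardware hash function):

```python
# Sketch of a hashed page table with chained entries
# (each entry: page number, frame number, pointer to the next element).
class Entry:
    def __init__(self, page, frame, nxt=None):
        self.page, self.frame, self.next = page, frame, nxt

TABLE_SIZE = 8
buckets = [None] * TABLE_SIZE                 # each slot heads a linked list

def insert(page, frame):
    i = hash(page) % TABLE_SIZE               # hash function on the page number
    buckets[i] = Entry(page, frame, buckets[i])

def lookup(page):
    node = buckets[hash(page) % TABLE_SIZE]
    while node:                               # walk the chain of colliding entries
        if node.page == page:
            return node.frame
        node = node.next
    raise KeyError("page fault: page not mapped")

insert(0x12345, 7)
insert(0x9999D, 3)        # may collide with the first entry; the chain resolves it
print(lookup(0x12345))    # -> 7
```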
3.Inverted Page Table
Consider six processes in execution.

Each of the six processes will have some of its pages in main memory, which requires each process's page table to also be kept in main memory, consuming a lot of space. This is a drawback of the paging concept.

The inverted page table is the solution to this wastage of memory.

The concept of an inverted page table involves a single page table that has entries for all the pages currently in main memory (whichever process they belong to), along with information identifying the process with which each page is associated.
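A minimal sketch of an inverted page table searched by (process id, page number) (the frame contents are illustrative; in practice the linear search would be accelerated with a hash table):

```python
# Sketch of an inverted page table: one entry per physical frame, holding
# (process id, page number); translation searches for the matching entry.
inverted_table = [
    ("P1", 0),   # frame 0 holds page 0 of process P1
    ("P2", 3),   # frame 1 holds page 3 of process P2
    ("P1", 2),   # frame 2 holds page 2 of process P1
]

def translate_inverted(pid: str, page: int, page_size: int = 4096) -> int:
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):               # linear search over all frames
            return frame * page_size           # offset omitted for brevity
    raise KeyError("page fault: (pid, page) not resident")

print(translate_inverted("P1", 2))   # -> 8192 (frame 2)
```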
Segmentation
● Most users (programmers) do not think of their programs as existing in one continuous linear address space.
● Rather, they tend to think of their memory in multiple segments, each dedicated to a particular use, such as code, data, the stack, the heap, etc.
● Memory segmentation supports this view by providing addresses with a segment number (mapped to a segment base address) and an offset from the beginning of that segment.
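A minimal sketch of segment-table translation mirroring the segmentation hardware (the base and limit values are illustrative):

```python
# Sketch of segment-table translation: a logical address is (segment, offset);
# each segment has a base and a limit, as in the segmentation hardware.
segment_table = {
    0: {"base": 1400, "limit": 1000},   # e.g. a code segment
    1: {"base": 6300, "limit": 400},    # e.g. a stack segment
}

def translate_segment(segment: int, offset: int) -> int:
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("trap: offset beyond segment limit")
    return entry["base"] + offset

print(translate_segment(1, 100))   # -> 6400
# translate_segment(1, 500)         # would trap: 500 >= limit 400
```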
Segmentation
Segmentation Hardware
Example of Segmentation
