
Unit III

Logical Versus Physical Address Space


 An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address.
 The compile-time and load-time address-binding methods generate identical logical and physical addresses.
 However, the execution-time address-binding scheme results in differing logical and physical addresses. In this case, we usually refer to the logical address as a virtual address.
 The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-
management unit (MMU).
[Figure 8.4: Dynamic relocation using a relocation register. The MMU adds the relocation-register value (14000) to the logical address generated by the CPU (346) to form the physical address sent to memory (14346).]
 For the time being, we illustrate this mapping with a simple MMU scheme, which is a generalization of the base-register scheme (see Fig. 8.4).
o The base register is now called a relocation register.
o The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
o For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346.
 The user program never sees the real physical addresses. The program can create a pointer to location 346, store it in memory, manipulate it, and compare it with other addresses, all as the number 346.
 The user program deals with logical addresses. The memory-mapping hardware converts logical addresses
into physical addresses.
 The concept of a logical address space that is bound to a separate physical address space is central to proper
memory management.
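The relocation-register scheme above can be sketched in a few lines. This is a minimal illustration, not a real MMU: the register value comes from Fig. 8.4, while the `limit` bound and the function name are assumptions added for the sketch.

```python
RELOCATION_REGISTER = 14000   # base value, as in Fig. 8.4

def mmu_translate(logical_addr, limit=20000):
    """Relocate a CPU-generated logical address to a physical address.
    `limit` is a hypothetical bound on the process's address space."""
    if not 0 <= logical_addr < limit:
        raise MemoryError("addressing-error trap")
    return logical_addr + RELOCATION_REGISTER

print(mmu_translate(346))   # logical 346 maps to physical 14346, as in Fig. 8.4
print(mmu_translate(0))     # logical 0 maps to the base itself, 14000
```

The user program only ever manipulates the logical values (0, 346); the addition to the base happens on every memory reference, invisibly to the program.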
Swapping: Swapping is a simple memory/process management technique used by the operating system (OS) to increase the utilization of the processor by moving some blocked processes from main memory to secondary memory (hard disk), thus forming a queue of temporarily suspended processes while execution continues with the newly arrived process. After performing the swap, the operating system has two options in selecting a process for execution:
* The operating system can admit a newly created process.
* The operating system can activate a suspended process from the swap memory.

Figure: Swapping of two processes using a disk as a backing store.

Swapping can be implemented in various ways. For example, swapping can be priority based: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process to secondary memory so that the higher-priority process can be loaded into main memory for execution. As soon as the higher-priority process finishes, the lower-priority process is swapped back into main memory and its execution continues. Swapping is therefore sometimes called roll out, roll in.
Partitions in swapping do not have to be fixed, as was the case with the previous method. One problem that arises from swapping with variable-sized partitions is the creation of holes in memory. Holes are areas of unused memory between partitions; they are usually too small to hold a process and too scattered to be of much use.
The solution is to combine all the holes together, allowing a process to move into that space on the next cycle. This
technique is called memory compaction. Since memory compaction has a large overhead (takes a considerable
amount of time to complete), various other algorithms are used by the memory manager to allocate processes to
holes. These are described below:
First fit: Looks for the first hole that is big enough for the process and uses it, splitting the hole into an allocated part and a leftover hole according to the process's size.
Best fit: Looks for the hole that has the closest size to that of the process, instead of breaking up a big hole that
may be used by a big process at a later stage.
Worst fit: Looks for the largest available hole to break up for a process. This algorithm reasons that a larger hole
that is broken up will have part of it left behind to be useful to another process as opposed to best fit, which looks
for a hole that is just the right size for a process, but will leave behind a very small and possibly unusable hole.
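The three strategies above can be compared with a small sketch. The hole representation, example sizes, and function name are illustrative assumptions, not part of any particular memory manager:

```python
def find_hole(holes, size, strategy):
    """Pick a hole for a request of `size`; holes are (start, length) pairs.
    Returns the index of the chosen hole, or None if nothing fits."""
    fits = [(i, length) for i, (start, length) in enumerate(holes)
            if length >= size]
    if not fits:
        return None
    if strategy == "first":
        return fits[0][0]                        # first hole big enough
    if strategy == "best":
        return min(fits, key=lambda f: f[1])[0]  # smallest adequate hole
    if strategy == "worst":
        return max(fits, key=lambda f: f[1])[0]  # largest hole overall

holes = [(0, 100), (200, 500), (900, 300)]
print(find_hole(holes, 250, "first"))  # 1: the 500-byte hole is the first that fits
print(find_hole(holes, 250, "best"))   # 2: the 300-byte hole is closest in size
print(find_hole(holes, 250, "worst"))  # 1: the 500-byte hole is the largest
```

Note that for the same 250-byte request the three strategies can pick different holes, which is exactly why they leave behind differently sized leftover holes.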
Contiguous Memory Allocation: The main memory must accommodate both the operating system and the
various user processes. We therefore need to allocate different parts of the main memory in the most efficient way
possible.
The memory is usually divided into two partitions: one for the resident operating system, and one for the user
processes. We may place the operating system in either low memory or high memory. With this approach each
process is contained in a single contiguous section of memory.
One of the simplest methods for memory allocation is to divide memory into several fixed-sized partitions. Each
partition may contain exactly one process. In this multiple-partition method, when a partition is free, a process is
selected from the input queue and is loaded into the free partition. When the process terminates, the partition
becomes available for another process. The operating system keeps a table indicating which parts of memory are
available and which are occupied. Finally, when a process arrives and needs memory, a memory section large
enough for this process is provided.
Note: Depending on the method used, this approach can suffer from external as well as internal memory
fragmentation.
Fragmentation: occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.
The term also refers to the condition of a disk in which files are divided into pieces scattered around the disk. Fragmentation occurs naturally when you use a disk frequently, creating, deleting, and modifying files. At some point, the operating system needs to store parts of a file in noncontiguous clusters. This is entirely invisible to users, but it can slow down the speed at which data is accessed, because the disk drive must search through different parts of the disk to put together a single file.
Internal fragmentation
Due to the rules governing memory allocation, more computer memory is sometimes allocated than is needed. For example, memory may only be provided to programs in chunks whose size is a multiple of 4, 8, or 16 bytes, so if a program requests, say, 23 bytes, it will actually get a chunk of 24. The excess memory goes to waste. In this scenario, the unusable memory is contained within an allocated region. The fixed-partition arrangement suffers from the same inefficiency: any process, no matter how small, occupies an entire partition. This waste is called internal fragmentation.
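The 23-to-24-byte example can be checked with a one-line rounding rule. The 8-byte granule is an assumption chosen to match the example:

```python
def allocated_size(request, granule=8):
    """Round a request up to the next multiple of the allocation granule."""
    return ((request + granule - 1) // granule) * granule

req = 23
waste = allocated_size(req) - req   # internal fragmentation for this request
print(allocated_size(req), waste)   # 24 bytes allocated, 1 byte wasted
```

With a coarser 16-byte granule the same 23-byte request would receive 32 bytes, wasting 9; coarser granules mean faster allocation bookkeeping but more internal fragmentation.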


External fragmentation
External fragmentation arises when free memory is separated into small blocks and is interspersed by allocated
memory. It is a weakness of certain storage allocation algorithms, when they fail to order memory used by
programs efficiently. The result is that, although free storage is available, it is effectively unusable because it is
divided into pieces that are too small individually to satisfy the demands of the application. The term "external"
refers to the fact that the unusable storage is outside the allocated regions.
For example, consider a situation wherein a program allocates 3 continuous blocks of memory and then frees the
middle block. The memory allocator can use this free block of memory for future allocations. However, it cannot
use this block if the memory to be allocated is larger in size than this free block.
External fragmentation also occurs in file systems as many files of different sizes are created, change size, and are
deleted. The effect is even worse if a file which is divided into many small pieces is deleted, because this leaves
similarly small regions of free spaces.

Paging: Paging is a technique which allows a single process to be placed in non-contiguous address spaces. Under paging, primary memory stores a process in a way that gives the user the impression that the process occupies adjacent address spaces, but in reality the process is scattered across a number of non-adjacent blocks spread across primary memory.
Under this approach, each user process is divided into a number of blocks known as pages. Further, physical memory is also divided into a number of equal-sized blocks known as frames. Each page is loaded into a specific frame, and a data structure known as the page map table is maintained which keeps track of the pages and the respective frames into which they are loaded. Each process has a page map table associated with it.
Any address generated by the program is seen as a page number and an offset (a mathematical function translates the address into a page number and an offset). The offset need not be mapped, because the page size and the frame size are the same. The page numbers and frame numbers are kept track of by the page map table; thus a logical address (page p, offset d) is mapped to the physical address (frame f, offset d). The correspondence of p and f is determined by the mathematical function and recorded in the page map table.
External fragmentation is eliminated, but internal fragmentation still remains, as the size of a process is seldom a multiple of the page size.
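The (p, d) to (f, d) translation just described can be sketched directly. The page size, the page-map-table contents, and the function name are illustrative assumptions:

```python
PAGE_SIZE = 1024                          # assumed page size (a power of two)
page_table = {0: 5, 1: 2, 2: 7}           # hypothetical page map table: p -> f

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE         # page number
    d = logical_addr % PAGE_SIZE          # offset, unchanged by the mapping
    f = page_table[p]                     # frame from the page map table
    return f * PAGE_SIZE + d              # physical address

print(translate(2049))                    # page 2, offset 1 -> 7*1024 + 1 = 7169
```

Because the page size is a power of two, a real MMU performs the divide and modulo by simply splitting the address bits into a page-number field and an offset field.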
Segmentation: Segmentation is a non-contiguous allocation method under which the logical address space is broken into a number of semantically defined logical units known as segments.
These segments are loaded into physical memory in a non-contiguous fashion. To keep track of the various segments and their respective addresses in physical memory, a segment table is maintained.
Each segment represents a semantically defined and distinct logical unit of the user process.
A program is a collection of segments. A segment is a logical unit such as:
main program, procedure, function, object, local variables, global variables, common block, stack, arrays

Logical address consists of a two tuple: <segment-number, offset>,


Segment table – maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has:
1. base – contains the starting physical address where the segments reside in memory.
2. limit – specifies the length of the segment.
Segment-table base register (STBR) points to the segment table’s location in memory.
Segment-table length register (STLR) indicates number of segments used by a program;
segment number s is legal if s < STLR.
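The STLR check and the base-limit lookup can be sketched as follows. The segment-table contents are hypothetical, and both illegal cases raise an error standing in for the hardware trap:

```python
# hypothetical segment table: entries of (base, limit)
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]
STLR = len(segment_table)             # number of segments used by the program

def seg_translate(s, d):
    """Map logical address <s, d> to a physical address, trapping on errors."""
    if s >= STLR:                     # segment number s is legal only if s < STLR
        raise MemoryError("trap: segment number out of range")
    base, limit = segment_table[s]
    if d >= limit:                    # offset must fall within the segment
        raise MemoryError("trap: offset exceeds segment limit")
    return base + d

print(seg_translate(2, 53))           # base 4300 + offset 53 = 4353
```

Unlike paging, the offset here must be checked against a per-segment limit, because segments have variable lengths.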
[Figure: Segmentation hardware.]
[Figure: Example of segmentation.]

Advantages:
1. Internal fragmentation is eliminated.
2. Data structures which shrink and grow can be implemented more efficiently.
3. Linking of subroutines is greatly simplified, because within every segment offsets begin at zero.
4. Sharing of library routines is also simplified, as a single segment can be referred to by a number of processes in their respective segment tables.
5. A protection mechanism best suited to each individual segment can be deployed; as segments hold non-contiguous address spaces in isolation, their protection concerns are greatly reduced.
Disadvantages:
1. External fragmentation arises, as between two segments there may be some space which is not big enough to accommodate any other segment.
2. A greater degree of awareness of the program's structure is required from the user environment.
Difference between Paging and Segmentation
The difference between paging and segmentation is that with paging the system divides memory into fixed-size units without regard to the program's structure, whereas with segmentation memory is allocated to specific logical units (functions, data structures) of the program. Paging by itself provides no per-unit protection or code sharing, while segmentation supports per-segment protection and sharing.
Demand paging: Not all of a process's virtual address space needs to be loaded in main memory at any given time. Each page can be either:
- In memory (physical page frame)
- On disk (backing store)
In virtual memory systems, demand paging is a type of swapping in which pages of data are not copied from disk
to RAM until they are needed. In contrast, some virtual memory systems use anticipatory paging, in which the
operating system attempts to anticipate which data will be needed next and copies it to RAM before it is actually
required.
As there is much less physical memory than virtual memory the operating system must be careful that it does not
use the physical memory inefficiently. One way to save physical memory is to only load virtual pages that are
currently being used by the executing program. For example, a database program may be run to query a database.
In this case not all of the database needs to be loaded into memory, just those data records that are being examined.
Also, if the database query is a search query, then it does not make sense to load the code from the database
program that deals with adding new records. This technique of only loading virtual pages into memory as they are
accessed is known as demand paging.
Advantage:
 Only loads pages that are demanded by the executing process.
 As there is more space in main memory, more processes can be loaded, reducing context-switching time, which otherwise consumes large amounts of resources.
 Less loading latency occurs at program startup, as less information is accessed from secondary storage and less
information is brought into main memory.
 As main memory is expensive compared to secondary memory, this technique helps significantly reduce the
bill of material (BOM) cost in smart phones for example. Symbian OS had this feature.
Disadvantage:
 Individual programs face extra latency when they access a page for the first time.
 Programs running on low-cost, low-power embedded systems may not have a memory management unit
that supports page replacement.
 Memory management with page replacement algorithms becomes slightly more complex.
 Possible security risks, including vulnerability to timing attacks;
Page Replacement
In a computer operating system that uses paging for virtual memory management, page replacement algorithms
decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated.
Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because
there are none, or because the number of free pages is lower than some threshold.
When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in
from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement
algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the
limited information about accesses to the pages provided by hardware, and tries to guess which pages should be
replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and
processor time) of the algorithm itself.
Page Replacement Algorithms
1. RAND (Random)
 choose any page to replace at random
 assumes the next page to be referenced is random
 can test other algorithms against random page replacement
2. MIN (minimum) or OPT (optimal) :
 Belady's optimal algorithm for the minimum number of page faults
 replace the page that will be referenced furthest in the future or not at all
 problem: we cannot implement it, because we cannot predict the future
 this is the best case
 can use it to compare other algorithms against
3. FIFO (First In, First Out):
 select the page that has been in main memory the longest
 use a queue (data structure)
 problem: although a page has been present for a long time, it may be really useful
 Windows NT and Windows 2000 use this algorithm, as a local page replacement algorithm (described
separately), with the pool method (described in more detail separately)
o create a pool of the pages that have been marked for removal
o manage the pool in the same way as the rest of the pages
o if a new page is needed, take a page from the pool
o if a page in the pool is referenced again before being replaced in memory, it is simply reactivated
o this is relatively efficient
4. LRU (Least Recently Used):
 choose the page that was last referenced the longest time ago
 assumes recent behavior is a good predictor of the near future
 can manage LRU with a list called the LRU stack or the paging stack (data structure)
 in the LRU stack, the first entry describes the page referenced least recently, and the last entry describes the page referenced most recently
 if a page is referenced, move it to the end of the list
 problem: requires updating on every page referenced
 too slow to be used in practice for managing the page table, but many systems use approximations to LRU

5. NRU (Not Recently Used):
 as an approximation to LRU, select one of the pages that has not been used recently (as opposed to
identifying exactly which one has not been used for the longest amount of time)
 keep one bit called the "used bit" or "reference bit", where 1 => used recently and 0 => not used recently
 variants of this scheme are used in many operating systems, including UNIX and Macintosh
 most variations use a scan pointer and go through the page frames one by one, in some order, looking for a
page that has not been used recently.
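The policies above can be compared by counting faults on a reference string. The sketch below simulates FIFO and LRU with an ordered map whose key order doubles as the eviction order (the function name and representation are my own; the 20-reference string is a common textbook example):

```python
from collections import OrderedDict

def count_faults(refs, frames, policy="fifo"):
    """Count page faults for a reference string under FIFO or LRU."""
    mem = OrderedDict()                   # key order = eviction order
    faults = 0
    for page in refs:
        if page in mem:
            if policy == "lru":           # a hit refreshes recency under LRU
                mem.move_to_end(page)
            continue
        faults += 1                       # page fault: page not resident
        if len(mem) >= frames:
            mem.popitem(last=False)       # evict oldest (FIFO) / least recent (LRU)
        mem[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(count_faults(refs, 3, "fifo"))      # 15 faults
print(count_faults(refs, 3, "lru"))       # 12 faults
```

The only difference between the two policies is the `move_to_end` on a hit: FIFO ignores reuse, while LRU rewards it, which is why LRU faults less on this string.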
Thrashing:
When referring to a computer, thrashing or disk thrashing describes a hard drive being overworked by moving information between system memory and virtual memory excessively. Thrashing is often caused when the system does not have enough memory, the system swap file is not properly configured, or too much is running on the computer and it has low system resources.

When thrashing occurs, the user will notice that the hard drive is constantly working and that system performance has decreased. Thrashing is bad for a hard drive because of the amount of work the drive has to do, and if left unfixed it will likely cause an early drive failure.
To resolve hard drive thrashing, a user can do any of the below:
1. Increase the amount of RAM in the computer.
2. Decrease the amount of programs being run on the computer.
3. Adjust the size of the swap file.
Overlays:
In a general computing sense, overlaying means "replacement of a block of stored instructions or data with another". Overlaying is a programming method that allows programs to be larger than the computer's main memory. An embedded system would normally use overlays because of the limitation of physical memory, which is internal for a system-on-chip.
The method consists of dividing a program into self-contained object code blocks called overlays. The size of an
overlay is limited according to memory constraints. The place in memory where an overlay is loaded is called an
overlay region or destination region. Although the idea is to reuse the same block of main memory, multiple region
systems could be defined. The regions can be different sizes. An overlay manager, possibly part of the operating
system, will load the required overlay from external memory into its destination region in order to be used.
Some linkers provide support for overlays.
Overlay programming requires the program designer to be very aware of the size of each part of the program. This
meant using programming languages or assemblers that allow the designer or architect control over the size of the
program and more importantly, the size of the overlay. This constraint added many design difficulties that do not
exist with virtual memory.


Virtual Memory:
Virtual memory is a technique that allows the execution of processes which are not completely available in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. The following are situations in which the entire program is not required to be loaded fully in main memory:
 User-written error-handling routines are used only when an error occurs in the data or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
 Fewer I/O operations would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is available.
 Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation
system. Demand segmentation can also be used to provide virtual memory.


Demand Paging
A demand paging system is quite similar to a paging system with swapping. When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, we use a lazy swapper called a pager.
When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only the necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
Hardware support is required to distinguish between the pages that are in memory and the pages that are on the disk, using the valid-invalid bit scheme: the bit in each page-table entry indicates whether the page is valid and resident in memory, or invalid.
Marking a page invalid will have no effect if the process never attempts to access that page. While the process executes and accesses pages that are memory resident, execution proceeds normally.

Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory. The page fault is handled as follows:

Step 1: Check an internal table for this process to determine whether the reference was a valid or an invalid memory access.
Step 2: If the reference was invalid, terminate the process. If it was valid but the page has not yet been brought in, page it in.
Step 3: Find a free frame.
Step 4: Schedule a disk operation to read the desired page into the newly allocated frame.
Step 5: When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
Step 6: Restart the instruction that was interrupted by the illegal address trap. The process can now access the page as though it had always been in memory. Therefore, the operating system reads the desired page into memory and restarts the process as though the page had always been in memory.
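The six-step sequence can be sketched as a simplified handler. Everything here is an illustrative assumption: the `Process` class, the free-frame list, and the string return values stand in for real kernel mechanisms, and the disk read of step 4 is elided.

```python
FREE_FRAMES = [3, 9, 12]                    # hypothetical free-frame list

class Process:
    def __init__(self, valid_pages):
        self.valid = set(valid_pages)       # pages in this process's space
        self.page_table = {}                # page -> frame once resident

def handle_page_fault(proc, page):
    """Sketch of the six-step page-fault sequence."""
    if page not in proc.valid:              # steps 1-2: invalid reference
        return "terminate"
    frame = FREE_FRAMES.pop()               # step 3: find a free frame
    # step 4: a real kernel would schedule a disk read of `page` into `frame`
    proc.page_table[page] = frame           # step 5: update the page table
    return "restart"                        # step 6: restart the instruction
```

A real handler must also deal with the case where no free frame exists, which is where the page-replacement algorithms of the next section come in.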
ADVANTAGES
The following are the advantages of demand paging:
 Large virtual memory.
 More efficient use of memory.
 Unconstrained multiprogramming: there is no limit on the degree of multiprogramming.
DISADVANTAGES
The following are the disadvantages of demand paging:
 The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
 Due to the lack of explicit constraints on a job's address-space size, thrashing may occur.

Segmentation is a Memory Management technique in which memory is divided into variable sized
chunks which can be allocated to processes. Each chunk is called a segment.
