Unit 3: OS
Paging
A computer can address more memory than the amount physically installed on the system.
This extra memory is called virtual memory, and it is a section of the hard disk that is set up to
emulate the computer's RAM. The paging technique plays an important role in implementing virtual
memory.
Paging is a memory management technique in which the process address space is broken into blocks of
the same size called pages (the page size is a power of 2, typically between 512 bytes and 8192 bytes).
The size of a process is measured in the number of pages.
Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames and the size of a frame is kept the same as that of a page to have optimum utilization
of the main memory and to avoid external fragmentation.
Address Translation
A page address is called a logical address and is represented by a page number and an offset.
Logical Address = Page number + Page offset
A frame address is called a physical address and is represented by a frame number and an offset.
Physical Address = Frame number + Page offset
A data structure called the page map table is used to keep track of the mapping between the pages of a
process and the frames in physical memory.
When the system allocates a frame to a page, it translates the logical address into a
physical address and creates an entry in the page table, which is used throughout the execution of the
program.
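As a rough illustration, the sketch below translates a logical address into a physical address. The page size, the page table contents, and the example address are all made-up values chosen only for demonstration.

```python
# Minimal sketch of paging address translation (hypothetical values).
PAGE_SIZE = 4096  # assume 4 KB pages; frame size equals page size

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    """Split a logical address into (page number, offset) and map it to a physical address."""
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page_number not in page_table:
        raise KeyError(f"page {page_number} not in memory (page fault)")
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + offset

# Example: logical address 8200 lies in page 2 at offset 8, which maps to frame 7.
print(translate(8200))  # -> 7 * 4096 + 8 = 28680
```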
When a process is to be executed, its pages are loaded into any available memory
frames. Suppose you have a program of 8 KB but the memory can accommodate only 5 KB at a
given point in time; this is where the paging concept comes into the picture. When a computer runs out of
RAM, the operating system (OS) moves idle or unwanted pages of memory to secondary
storage to free up RAM for other processes, and brings them back when they are needed by the program.
This process continues throughout the execution of the program: the OS keeps removing
idle pages from main memory, writing them to secondary storage, and bringing them back
when the program requires them.
Advantages and Disadvantages of Paging
Here is a list of advantages and disadvantages of paging −
Paging reduces external fragmentation, but it still suffers from internal fragmentation.
Paging is simple to implement and is regarded as an efficient memory management technique.
Because pages and frames are of equal size, swapping becomes very easy.
The page table requires extra memory space, so paging may not be suitable for a system with a small amount of RAM.
Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside
in secondary memory and pages are loaded only on demand, not in advance. When a context switch
occurs, the operating system does not copy any of the old program's pages out to disk or any of
the new program's pages into main memory. Instead, it just begins executing the new program
after loading its first page and fetches that program's pages as they are referenced.
While executing a program, if the program references a page that is not available in main
memory because it was swapped out a little while ago, the processor treats this invalid memory reference
as a page fault and transfers control from the program to the operating system to demand the page
back into memory.
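To make the idea concrete, here is a minimal sketch that loads pages only when they are first referenced and counts the resulting page faults. The reference string is a made-up example, and the sketch assumes memory has room for every page, so replacement is ignored.

```python
# Minimal demand-paging sketch: pages are brought in only when referenced.
def demand_paging(reference_string):
    in_memory = set()   # pages currently resident in main memory
    faults = 0
    for page in reference_string:
        if page not in in_memory:
            faults += 1          # page fault: load the page on demand
            in_memory.add(page)
    return faults

# Hypothetical reference string for illustration.
print(demand_paging([1, 2, 1, 3, 2, 4, 1]))  # -> 4 faults (pages 1, 2, 3, 4 loaded once each)
```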
Advantages
Following are the advantages of Demand Paging −
Large virtual memory.
More efficient use of memory.
There is no limit on the degree of multiprogramming.
Page Replacement Algorithms
Page replacement algorithms are the techniques by which an operating system decides which
memory pages to swap out and write to disk when a page of memory needs to be allocated. Paging
happens whenever a page fault occurs and a free page cannot be used for the allocation, either
because no pages are available or because the number of free pages is lower than the number of
required pages.
When the page that was selected for replacement and paged out is referenced again, it has to
be read in from disk, and this requires waiting for I/O completion. This wait determines the quality of the
page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
A page replacement algorithm looks at the limited information about page accesses provided
by the hardware and tries to select which pages should be replaced so as to minimize the total number of
page misses, while balancing this against the cost in primary storage and processor time of the
algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm
by running it on a particular string of memory references and computing the number of page faults.
Reference String
The string of memory references is called a reference string. Reference strings are generated
artificially or by tracing a given system and recording the address of each memory reference. The
latter choice produces a large amount of data, about which we note two things.
For a given page size, we need to consider only the page number, not the entire address.
If we have a reference to a page p, then any immediately following references to page p will
never cause a page fault. Page p will be in memory after the first reference; the immediately
following references will not fault.
For example, consider the following sequence of addresses − 123,215,600,1234,76,96
If page size is 100, then the reference string is 1,2,6,12,0,0
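The page numbers above can be derived mechanically by dividing each address by the page size; the short sketch below reproduces the example using the addresses and page size given above.

```python
# Derive a reference string (page numbers) from raw memory addresses.
PAGE_SIZE = 100
addresses = [123, 215, 600, 1234, 76, 96]

reference_string = [addr // PAGE_SIZE for addr in addresses]
print(reference_string)  # -> [1, 2, 6, 12, 0, 0]
```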
1. First In First Out (FIFO) algorithm
The oldest page in main memory is the one which will be selected for replacement.
It is easy to implement: keep a list, replace pages from the tail, and add new pages at the head.
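A minimal FIFO simulation, assuming a fixed number of frames and a hypothetical reference string, might look like this:

```python
from collections import deque

# FIFO page replacement: evict the page that has been in memory the longest.
def fifo_faults(reference_string, num_frames):
    frames = deque()          # left end = oldest page, right end = newest page
    faults = 0
    for page in reference_string:
        if page in frames:
            continue          # page already resident: no fault
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

# Hypothetical reference string and frame count for illustration.
print(fifo_faults([1, 2, 6, 12, 0, 0], num_frames=3))  # -> 5 faults
```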
2. Optimal Page algorithm
An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms. An
optimal page-replacement algorithm exists, and has been called OPT or MIN.
Replace the page that will not be used for the longest period of time; this requires knowing, in
advance, when each page will next be used.
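Since OPT needs to look ahead in the reference string, it is normally shown only as a simulation; here is a sketch with a made-up reference string and frame count.

```python
# Optimal (OPT/MIN) page replacement: evict the page whose next use is farthest in the future.
def opt_faults(reference_string, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        # For each resident page, find the index of its next use (infinity if never used again).
        def next_use(p):
            future = reference_string[i + 1:]
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

# Hypothetical reference string for illustration.
print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], num_frames=3))  # -> 6 faults
```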
3. Least Recently Used (LRU) algorithm
The page which has not been used for the longest time in main memory is the one which will be
selected for replacement.
It is easy to implement: keep a list and replace pages by looking back in time.
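A minimal LRU simulation, again with a hypothetical reference string and frame count:

```python
# LRU page replacement: evict the page that has gone unused for the longest time.
def lru_faults(reference_string, num_frames):
    frames = []               # ordered from least recently used to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.remove(page)   # refresh: move the page to the most-recently-used end
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)
    return faults

# Hypothetical reference string for illustration.
print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3], num_frames=3))  # -> 8 faults
```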
4. Page Buffering algorithm
To get a process started quickly, the system keeps a pool of free frames. On a page fault, a victim
page is selected, the new page is written into a frame taken from the free pool, the page table is
updated, and the process is restarted; the dirty victim page is written out to disk later and its frame
is returned to the free pool.
5. Least Frequently Used (LFU) algorithm
The page with the smallest count is the one which will be selected for replacement.
This algorithm suffers from the situation in which a page is used heavily during the initial
phase of a process, but then is never used again.
6. Most Frequently Used (MFU) algorithm
This algorithm is based on the argument that the page with the smallest count was probably
just brought in and has yet to be used.
Contiguous Allocation
Contiguous memory allocation is a memory management technique in which, whenever a user
process requests memory, a single section of contiguous memory is given to that process according
to its requirement. It is achieved by dividing the memory into fixed-sized partitions or
variable-sized partitions.
Similarly, if the blocks of a file are allocated in such a way that all the logical blocks of the file
get contiguous physical blocks on the hard disk, the allocation scheme is known as contiguous
allocation. The directory records the starting block and the length of each file, and each file is
assigned that many contiguous blocks as per its need.
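As an illustration of the idea (not of any particular operating system's implementation), the sketch below hands out a single contiguous block using a first-fit search over a hypothetical list of free holes.

```python
# First-fit sketch of contiguous memory allocation (hypothetical hole list).
# Free memory is tracked as a list of (start, size) holes; a request is satisfied
# by the first hole large enough to hold it, in one contiguous block.
def allocate_first_fit(holes, request_size):
    for i, (start, size) in enumerate(holes):
        if size >= request_size:
            remaining = size - request_size
            if remaining > 0:
                holes[i] = (start + request_size, remaining)  # shrink the hole
            else:
                holes.pop(i)                                  # hole fully consumed
            return start          # base address of the allocated block
    return None                   # no single hole is large enough

holes = [(0, 100), (300, 500), (1000, 200)]
print(allocate_first_fit(holes, 150))  # -> 300 (first hole of at least 150 units)
print(holes)                           # -> [(0, 100), (450, 350), (1000, 200)]
```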
Advantages
1. It is simple to implement.
2. We get excellent read performance.
3. It supports random access into files.
Disadvantages
1. The disk may become externally fragmented over time.
2. It is difficult for a file to grow, since enough contiguous free space may not be available next to it.
Paging vs Segmentation
1. In paging, the program is divided into fixed-size pages; in segmentation, the program is divided into variable-size segments.
2. Page size is determined by the hardware; segment size is given by the user.
3. Paging is faster in comparison to segmentation; segmentation is slower.
4. In paging, the logical address is split into a page number and a page offset; in segmentation, it is split into a segment number and a segment offset.
5. In paging, the processor uses the page number and offset to calculate the absolute address; in segmentation, it uses the segment number and offset to calculate the full address.
6. Protection is hard to apply in paging; it is easy to apply in segmentation.
7. The size of a page must always be equal to the size of a frame; there is no constraint on the size of segments.
Segmented Paging
Pure segmentation is not very popular and is not used in many operating systems.
However, segmentation can be combined with paging to get the best features of both
techniques. In segmented paging, the process address space is divided into variable-size segments,
which are further divided into fixed-size pages.
Each page table contains information about every page of its segment, and the segment table
contains information about every segment. Each segment table entry points to the base of a page
table, and every page table entry maps one of the pages of the segment to a frame in main memory.
The CPU generates a logical address, which is divided into two parts: a segment number and a
segment offset. The segment offset must be less than the segment limit. The offset is further divided
into a page number and a page offset. To locate the entry for that page, the page number is added to
the page table base taken from the segment table; the entry gives the frame number, and the frame
number combined with the page offset forms the physical address of the desired word in the page of
that segment of the process.
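The two-level lookup can be sketched as follows; the page size, the segment table, and the per-segment page tables are all hypothetical values used only to show the order of the steps.

```python
# Sketch of address translation under segmented paging (all table contents are hypothetical).
PAGE_SIZE = 1024  # assume 1 KB pages

# Segment table: segment number -> (limit in bytes, page table for that segment).
# Each page table maps a page number within the segment to a frame number.
segment_table = {
    0: (4096, {0: 9, 1: 4, 2: 11, 3: 2}),
    1: (2048, {0: 6, 1: 3}),
}

def translate(segment_number, segment_offset):
    limit, page_table = segment_table[segment_number]
    if segment_offset >= limit:
        raise ValueError("segment offset exceeds segment limit (protection fault)")
    page_number = segment_offset // PAGE_SIZE   # offset is split into page number...
    page_offset = segment_offset % PAGE_SIZE    # ...and page offset
    frame_number = page_table[page_number]
    return frame_number * PAGE_SIZE + page_offset

# Example: segment 0, offset 2500 -> page 2, offset 452 -> frame 11.
print(translate(0, 2500))  # -> 11 * 1024 + 452 = 11716
```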
Advantages of Segmented Paging
The size of each page table is limited by the size of its segment.
External fragmentation is avoided, since memory is ultimately allocated in fixed-size frames.
It simplifies memory allocation.
Page Fault
A page fault occurs when a program attempts to access data or code that is in its address space,
but is not currently located in the system RAM.
When a page fault occurs, the following sequence of events takes place:
1. The hardware traps to the kernel, and the program counter (PC) is saved on the stack.
Information about the state of the current instruction is saved in CPU registers.
2. An assembly routine is started to save the general registers and other volatile information, to
keep the OS from destroying it.
3. The operating system finds that a page fault has occurred and tries to find out which virtual page
is needed. Sometimes a hardware register contains this information; if not, the operating system
must retrieve the PC, fetch the instruction, and work out what it was doing when the fault occurred.
4. Once the virtual address that caused the page fault is known, the system checks whether the
address is valid and whether there is any protection violation.
5. If the virtual address is valid, the system checks whether a page frame is free. If no frames are
free, the page replacement algorithm is run to select a page to remove.
6. If the selected frame is dirty, the page is scheduled for transfer to disk, a context switch takes
place, the faulting process is suspended, and another process is allowed to run until the disk
transfer is completed.
7. As soon as the page frame is clean, the operating system looks up the disk address of the needed
page and schedules a disk operation to bring it in.
8. When the disk interrupt indicates that the page has arrived, the page tables are updated to reflect
its position, and the frame is marked as being in the normal state.
9. The faulting instruction is backed up to the state it had when it began, and the PC is reset. The
faulting process is scheduled, and the operating system returns to the routine that called it.
10. The assembly routine reloads the registers and other state information and returns to user space
to continue execution.
Thrashing
For example, if a process does not have the number of frames it needs to support its pages in active
use, it will quickly page fault. At this point, the process must replace some page. But since all its
pages are in active use, it must replace a page that will be needed again right away. Consequently,
the process quickly faults again, and again, and again, replacing pages that it must bring back in
immediately. This high paging activity by a process is called thrashing.
During thrashing, the CPU spends less time doing actual productive work and more time swapping
pages.
Figure: Thrashing
Causes of Thrashing
Thrashing affects the performance of execution in the operating system and results in severe
performance problems.
When CPU utilization is low, the process scheduling mechanism tries to load many processes into
memory at the same time, so that the degree of multiprogramming increases. In this situation there
are more processes in memory than there are available frames, so each process is allocated only a
limited number of frames and page faults become frequent.
Whenever a process with a high priority arrives in memory and no frame is free at that time, a page
of another process currently occupying a frame is moved to secondary storage, and the freed frame
is then allocated to the higher-priority process.