Unit4 OS Updated
BCS401
Unit 4
OS Unit 4 1
Subject Evaluation
OS Unit 4 2
Subject Syllabus
OS Unit 4 3
Syllabus For Unit-4
Memory Allocation: Allocation Strategies First Fit, Best Fit, and Worst Fit, Paging,
Segmentation, Paged Segmentation, Virtual Memory Concepts, Demand Paging,
Performance of Demand Paging, Page Replacement Algorithms: FIFO, LRU, Optimal
and LFU, Belady’s Anomaly, Thrashing, Cache Memory Organization, Locality of
Reference.
OS Unit 4 4
Unit-4 Memory Management (Objective)
After going through this unit, you should be able to:
• Describe the various ways of organizing memory hardware.
• Explore the various techniques of allocating memory to processes.
• Understand memory management functions such as allocating memory to processes, de-allocating
memory from processes, and implementing allocation strategies such as First Fit, Best Fit and Worst Fit.
• Understand virtual memory concepts and evaluate the performance of virtual memory with the help of
effective memory access time.
OS Unit 4 5
Content
OS Unit 4 6
Introduction
Memory management keeps track of the status of each memory location, whether it is
allocated or free.
It allocates memory to programs dynamically at their request and frees it for reuse
when it is no longer needed.
OS Unit 4 7
Base and Limit Registers
• A pair of base and limit registers define the logical address space
• CPU must check every memory access generated in user mode to be sure it is between
base and limit for that user
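A minimal sketch of this check (the variable names addr, base and limit are illustrative assumptions, not from the reference):

    #include <stdbool.h>

    /* Legal only if base <= addr < base + limit; otherwise the hardware
       traps to the operating system (addressing error). */
    bool access_is_legal(unsigned long addr, unsigned long base, unsigned long limit)
    {
        return addr >= base && addr < base + limit;
    }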
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg
Gagne Page No. -316
OS Unit 4 8
Hardware Address Protection (CO4)
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin,
and Greg Gagne Page No. -317
OS Unit 4 9
Binding of Instructions and Data to Memory
Address binding of instructions and data to memory addresses can happen at three different
stages:
– Compile time: If the memory location is known a priori, absolute code can be generated; the
code must be recompiled if the starting location changes
– Load time: Relocatable code must be generated if the memory location is not known at
compile time
– Execution time: Binding is delayed until run time if the process can be moved during its
execution from one memory segment to another; this needs hardware support for address
maps (e.g., base and limit registers)
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg
Gagne Page No. -318-319.
OS Unit 4 10
Multistep Processing of a User Program
OS Unit 4 12
Memory-Management Unit (MMU)
A hardware device that maps virtual (logical) addresses to physical addresses at run time.
Many methods are possible.
To start, consider a simple scheme in which the value in the relocation register is added to every
address generated by a user process at the time it is sent to memory:
-The user program deals with logical addresses; it never sees the real physical addresses
-Execution-time binding occurs when a reference is made to a location in memory
-The logical address is bound to a physical address at that point
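As a rough illustration (names are assumptions, not from the reference), relocation plus the limit check amounts to:

    #include <stdio.h>

    /* MMU translation with a relocation (base) register and a limit register. */
    unsigned long mmu_translate(unsigned long logical,
                                unsigned long relocation_reg,
                                unsigned long limit_reg)
    {
        if (logical >= limit_reg) {
            fprintf(stderr, "trap: addressing error\n");  /* hardware traps to the OS */
            return 0;
        }
        return logical + relocation_reg;   /* physical address sent to memory */
    }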
For Video Lecture:-
http://www.infocobuild.com/education/audio-video-courses/computer-science/IntroToOperatingSyste
ms-IIT-Madras/lecture-07.html
OS Unit 4 13
Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused routine is never loaded
All routines kept on disk in relocatable load format
Useful when large amounts of code are needed to handle infrequently occurring cases
No special support from the operating system is required
Implemented through program design
OS can help by providing libraries to implement dynamic loading
OS Unit 4 14
Dynamic relocation using a relocation register
OS Unit 4 15
Linking & Loading
Types of Linking
A. Static linking
B. Dynamic linking
Types of Loading
A. Static Loading
B. Dynamic Loading
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne Page
No. -320-321.
OS Unit 4 16
Linking & Loading
• The choice between static and dynamic loading is made while the computer program is being
developed. If the program is to be loaded statically, then at compile time the complete program is
compiled and linked, leaving no external program or module dependency. The linker combines the
object program with the other necessary object modules into an absolute program, which also
includes logical addresses.
• If you are writing a dynamically loaded program, the compiler compiles the program, and for all
the modules that are to be included dynamically only references are provided; the rest of the work
is done at execution time.
• At load time, with static loading, the absolute program (and data) is loaded into memory so that
execution can start.
• With dynamic loading, routines of a library are stored on disk in relocatable form and are loaded
into memory only when they are needed by the program.
OS Unit 4 17
Linking & Loading
• As explained above, when static linking is used, the linker combines all the modules needed by a
program into a single executable program, avoiding any run-time dependency.
• When dynamic linking is used, the actual module or library is not linked with the program; instead,
a reference to the dynamic module is provided at compile and link time. Dynamic Link Libraries
(DLLs) in Windows and shared objects in Unix are good examples of dynamic libraries.
OS Unit 4 18
Swapping
A process can be swapped temporarily out of memory to a backing store, and
then brought back into memory for continued execution
Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images
OS Unit 4 19
Schematic View of Swapping
OS Unit 4 20
Contiguous Allocation
Main memory must support both OS and user processes
Limited resource, must allocate efficiently
Contiguous allocation is one early method
Main memory is usually divided into two partitions:
– Resident operating system, usually held in low memory with interrupt vector
– User processes then held in high memory
– Each process contained in single contiguous section of memory
OS Unit 4 21
Contiguous Allocation (Cont.)
Relocation registers used to protect user processes from each other, and from
changing operating-system code and data
– Base register contains value of smallest physical address
– Limit register contains range of logical addresses – each logical address must
be less than the limit register
– MMU maps logical address dynamically
– Can then allow actions such as kernel code being transient and kernel
changing size
OS Unit 4 22
Hardware Support for Relocation and Limit Registers
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and
Greg Gagne Page No. -325.
OS Unit 4 23
Multiple-partition allocation
Hole – block of available memory; holes of various size are scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to accommodate it
OS Unit 4 24
Multiple-partition allocation
Process exiting frees its partition, adjacent free partitions combined
Operating system maintains information about:
a) allocated partitions
b) free partitions (hole)
OS Unit 4 25
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
Placement Algorithms
• First-fit: Allocate the first hole that is big enough; the search can stop as soon as such a hole is
found
• Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless the list is
ordered by size
– Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search the entire list
– Produces the largest leftover hole
Note: First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
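The three placement strategies can be sketched as follows; the Hole structure and the function name are illustrative assumptions, not a real allocator:

    #include <stddef.h>

    typedef struct { size_t start; size_t size; int free; } Hole;

    /* Returns the index of the chosen hole, or -1 if no free hole can hold
       `request`.  strategy: 0 = first fit, 1 = best fit, 2 = worst fit. */
    int choose_hole(const Hole holes[], int n, size_t request, int strategy)
    {
        int chosen = -1;
        for (int i = 0; i < n; i++) {
            if (!holes[i].free || holes[i].size < request)
                continue;
            if (strategy == 0)                 /* first fit: stop at the first match */
                return i;
            if (chosen == -1 ||
                (strategy == 1 && holes[i].size < holes[chosen].size) ||  /* best  */
                (strategy == 2 && holes[i].size > holes[chosen].size))    /* worst */
                chosen = i;
        }
        return chosen;
    }

Note how first fit can stop as soon as a large-enough hole is found, while best fit and worst fit must scan the whole list unless it is kept sorted by size.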
OS Unit 4 26
Fragmentation (CO4)
• Fragmentation is an unwanted problem in which memory blocks cannot be allocated to processes
because the blocks are too small, so they remain unused. Put another way: as processes are loaded
into and removed from memory, they create free spaces (holes) in memory, and these small blocks
cannot be allocated to newly arriving processes, which results in inefficient use of memory.
Basically, there are two types of fragmentation:
• Internal Fragmentation
• External Fragmentation
OS Unit 4 27
Types of Fragmentation
Internal Fragmentation
• In this type of fragmentation, a process is allocated a memory block larger than the process itself.
Because of this, some part of the block is left unused, and this causes internal fragmentation.
• Example: Suppose fixed partitioning is used for memory allocation in RAM (i.e. the memory blocks
have fixed sizes) and the block sizes are 2 MB, 4 MB, 4 MB and 8 MB. Some part of this RAM is
occupied by the Operating System (OS).
• Now, suppose a process P1 of size 3 MB arrives and is given a block of size 4 MB. The 1 MB that is
free in this block is wasted, and this space cannot be used to allocate memory to any other process.
This is called internal fragmentation.
OS Unit 4 28
Types of Fragmentation (CO4)
OS Unit 4 29
Types of Fragmentation
External Fragmentation
• In this type of fragmentation, the total free space needed by a process is available, yet we still
cannot place the process in memory because that space is not contiguous. This is called external
fragmentation.
• Example: Continuing the example above, suppose three new processes P2, P3 and P4 arrive, of
sizes 2 MB, 3 MB and 6 MB respectively, and are allocated the 2 MB, 4 MB and 8 MB blocks
respectively.
• Analysing this situation closely, P3 (1 MB unused) and P4 (2 MB unused) again cause internal
fragmentation, so a total of 4 MB (1 MB due to P1 + 1 MB due to P3 + 2 MB due to P4) is wasted
because of internal fragmentation.
• Now, suppose a new process of 4 MB arrives. Although we have a total free space of 4 MB, we still
cannot allocate this memory to the process. This is called external fragmentation.
OS Unit 4 30
Types of Fragmentation
OS Unit 4 31
Fragmentation
How to remove internal fragmentation?
• This problem occurs because the sizes of the memory blocks are fixed. It can be removed by using
dynamic partitioning to allocate space to processes. In dynamic partitioning, a process is allocated
only as much space as it requires, so there is no internal fragmentation.
How to remove external fragmentation?
• This problem occurs because memory is allocated to processes contiguously. If this condition is
removed, external fragmentation can be reduced. This is what is done in paging and segmentation
(non-contiguous memory allocation techniques), where memory is allocated to processes
non-contiguously.
• Another way to remove external fragmentation is compaction. When dynamic partitioning is used
for memory allocation, external fragmentation can be reduced by merging all the free memory
together into one large block. This technique is also called defragmentation. The larger block of
memory is then used to allocate space according to the needs of new processes.
OS Unit 4 32
Paging
Physical address space of a process can be noncontiguous; process is allocated physical memory
whenever the latter is available
-Avoids external fragmentation
-Avoids problem of varying sized memory chunks
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne
Page No. -328-332.
OS Unit 4 33
Address Translation Scheme
Address generated by CPU is divided into:
Page number (p) – used as an index into a page table which contains base address of each page
in physical memory
Page offset (d) – combined with base address to define the physical memory address that is sent
to the memory unit
For a logical address space of size 2^m and a page size of 2^n:
page number p – the high-order m − n bits
page offset d – the low-order n bits
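A minimal sketch of this translation, assuming 4 KB pages (n = 12) and a page table given simply as an array of frame numbers:

    #include <stdint.h>

    #define OFFSET_BITS 12u                       /* n = 12, so pages are 4 KB     */
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    uint32_t paged_translate(uint32_t logical, const uint32_t frame_of_page[])
    {
        uint32_t p = logical >> OFFSET_BITS;      /* page number (high-order bits) */
        uint32_t d = logical & (PAGE_SIZE - 1);   /* page offset (low-order n bits)*/
        return frame_of_page[p] * PAGE_SIZE + d;  /* physical address              */
    }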
OS Unit 4 34
Paging Hardware (CO4)
OS Unit 4 35
Implementation of Page Table
Page table is kept in main memory
Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates size of the page table
In this scheme every data/instruction access requires two memory accesses
-One for the page table and one for the data / instruction
The two memory access problem can be solved by the use of a special fast-lookup hardware
cache called associative memory or translation look-aside buffers (TLBs)
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne
Page No. -333
OS Unit 4 36
Associative Memory
OS Unit 4 37
Paging Hardware With TLB
OS Unit 4 38
Advantages & Disadvantages of paging
Advantages :
OS Unit 4 39
Structure of the Page Table
i. Hierarchical Paging
ii. Hashed Page Tables
iii. Inverted Page Tables
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne Page
No. -337-341.
OS Unit 4 40
Improvement in effective memory access time
Topic objective: To compute the effective memory access time (EAT) when a TLB is used.
Assuming a TLB hit ratio of 0.9, a memory access time of 100 ns and a TLB lookup time of 20 ns:
EAT = 0.9*(100+20) + (1-0.9)*(2*100+20)
    = 0.9*120 + 0.1*220 = 108 + 22 = 130 ns
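The same calculation as a small runnable sketch (the values are exactly the ones used above):

    #include <stdio.h>

    int main(void)
    {
        double hit_ratio = 0.9;    /* TLB hit ratio           */
        double t_mem     = 100.0;  /* memory access time (ns) */
        double t_tlb     = 20.0;   /* TLB lookup time (ns)    */

        double eat = hit_ratio * (t_mem + t_tlb)
                   + (1.0 - hit_ratio) * (2.0 * t_mem + t_tlb);
        printf("EAT = %.0f ns\n", eat);   /* prints EAT = 130 ns */
        return 0;
    }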
OS Unit 4 41
Inverted Page Table(IPT)
OS Unit 4 42
Inverted Page Table(IPT)
Topic Objective: To explain the translation of logical address into physical address using IPT
The size of the IPT is related to the size of physical memory, not to the size of the logical address
space. Since the IPT is not process-specific, it does not need to be switched when processes are
context-switched. The IPT contains a process id along with a page number for each frame of physical
memory; it is system-wide, not per-process.
OS Unit 4 43
Inverted Page Table(IPT)
A logical address generated by the CPU contains a process id (pid), a page number (p) and a page
offset (d). A search is carried out in the IPT for an entry matching the process id and the page
number. The position of the matching slot in the IPT gives the frame number where the desired page
resides. The frame number combined with the offset d gives the intended physical address.
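A simplified sketch of this lookup (the entry layout is assumed; a real inverted page table is usually searched through a hash rather than linearly):

    #include <stdint.h>

    typedef struct { int pid; uint32_t page; } IptEntry;

    /* Returns the frame number whose IPT slot matches (pid, page),
       or -1 if the page is not resident (page fault). */
    int ipt_lookup(const IptEntry ipt[], int num_frames, int pid, uint32_t page)
    {
        for (int frame = 0; frame < num_frames; frame++)
            if (ipt[frame].pid == pid && ipt[frame].page == page)
                return frame;            /* slot index == frame number */
        return -1;
    }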
OS Unit 4 44
Advantages of IPT over conventional Page Table
OS Unit 4 45
Hashed Page Table
Topic Objective: To explain the mechanism of Hashed Page Table with the help of following diagram
OS Unit 4 46
Hashed Page Table
Topic objective: Translation of logical address into physical address using Hashed Page Table.
Whenever a logical address is generated, a hash function is applied to the page number p to
generate an index i, e.g.
i = p % M;   (where M is the number of slots in the hash table)
The index value i is used to index into the hash table. Each entry in the table is a pointer to a linked
list of pages that hash to that slot.
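A rough sketch of the lookup with chaining; the table size M and the entry layout are assumptions:

    #include <stdint.h>
    #include <stddef.h>

    #define M 1024                        /* number of hash-table slots (assumed) */

    typedef struct HashedEntry {
        uint32_t page;                    /* virtual page number        */
        uint32_t frame;                   /* frame holding that page    */
        struct HashedEntry *next;         /* next element in the chain  */
    } HashedEntry;

    /* Returns the frame for page p, or -1 if it is not found (page fault). */
    long hpt_lookup(HashedEntry *table[], uint32_t p)
    {
        for (HashedEntry *e = table[p % M]; e != NULL; e = e->next)
            if (e->page == p)
                return (long)e->frame;
        return -1;
    }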
OS Unit 4 47
Multi-level Paging
Topic Objective: To explain the mechanism of multi-level paging with the help of following
diagram
OS Unit 4 48
Multi-level Paging
Topic Objective: To explain the steps for translating logical address into physical address in multi-level paging
scheme.
The page table is split into multiple levels. For example, in a two-level page table a logical address
comprises the following fields:
(a) Page number p1, used to index into the outer page table.
(b) Page number p2, used to index into the inner page table.
(c) Offset (displacement) d, used to index into the selected frame to obtain the desired operand.
A sketch of this decomposition is given below.
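A minimal sketch of the decomposition, assuming a 32-bit address split into 10 + 10 + 12 bits (an x86-style split, used here only for illustration):

    #include <stdint.h>

    void split_two_level(uint32_t logical,
                         uint32_t *p1, uint32_t *p2, uint32_t *d)
    {
        *p1 = (logical >> 22) & 0x3FF;    /* index into the outer page table  */
        *p2 = (logical >> 12) & 0x3FF;    /* index into the inner page table  */
        *d  =  logical        & 0xFFF;    /* offset within the selected frame */
    }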
OS Unit 4 49
Advantage of Multi-level Paging
OS Unit 4 51
Disadvantage of Multi-level Paging
OS Unit 4 52
Memory Protection in Paging
Memory protection implemented by associating protection bit with each frame to
indicate if read-only or read-write access is allowed
-Can also add more bits to indicate page execute-only, and so on
Valid-invalid bit attached to each entry in the page table:
“valid” indicates that the associated page is in the process’ logical address space,
and is thus a legal page
“invalid” indicates that the page is not in the process’ logical address space
Or use page-table length register (PTLR)
Any violations result in a trap to the kernel
OS Unit 4 53
Valid (v) or Invalid (i) Bit In A Page Table
OS Unit 4 54
Segmentation
Memory-management scheme that supports user’s view of memory.
A program is a collection of segments.
A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne
Page No. -343-345.
OS Unit 4 55
Segmentation Architecture
Logical address consists of a two tuple:
<segment-number, offset>,
Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segments reside in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table’s location in memory
Segment-table length register (STLR) indicates number of segments used by a program;
segment number s is legal if s < STLR
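A minimal sketch of segment-table translation with the limit check; the structure and register names are assumptions:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint32_t base; uint32_t limit; } SegEntry;

    /* Translates <s, d> into a physical address; reports a trap if the
       segment number or the offset is out of range. */
    uint32_t seg_translate(const SegEntry seg_table[], uint32_t stlr,
                           uint32_t s, uint32_t d)
    {
        if (s >= stlr || d >= seg_table[s].limit) {
            fprintf(stderr, "trap: addressing error\n");
            return 0;
        }
        return seg_table[s].base + d;
    }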
Video Lecture:-https://www.youtube.com/watch?v=xD5PB_g1rIE
OS Unit 4 56
Segmentation Architecture (Cont.)
Protection
With each entry in segment table associate:
-validation bit = 0 ⇒ illegal segment
-read/write/execute privileges
Protection bits associated with segments; code sharing occurs at segment level
Since segments vary in length, memory allocation is a dynamic storage-allocation problem
A segmentation example is shown in the following diagram
OS Unit 4 57
Segmentation Hardware
OS Unit 4 58
Segmentation Hardware
OS Unit 4 59
Advantages & Disadvantages of Segmentation
Advantages :
No internal fragmentation
Average segment size is larger than the actual page size.
It is easier to relocate segments than entire address space.
The segment table is of lesser size as compared to the page table in paging.
Less overhead.
Disadvantages :
It can have external fragmentation.
Costly memory management algorithms.
It is difficult to allocate contiguous memory to variable sized partition.
Segments of unequal size not suited well for swapping.
OS Unit 4 60
Segmentation with paging
In segmented paging,
Process is first divided into segments and then each segment is divided into pages.
These pages are then stored in the frames of main memory.
A page table exists for each segment that keeps track of the frames storing the pages of that
segment.
Each page in page table occupies one frame in the main memory.
OS Unit 4 61
Segmentation with paging
Number of entries in the page table of a segment = the number of pages into which that segment is
divided.
A segment table exists that keeps track of the frames storing the page tables of the segments.
Number of entries in the segment table of a process = the number of segments into which that
process is divided.
The base address of the segment table is stored in the segment table base register.
OS Unit 4 62
Segmentation With paging
CPU generates a logical address consisting of three parts-
Segment Number
Page Number
Page Offset
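Putting the pieces together, a rough sketch of the translation (the structure names and the 4 KB page size are assumptions):

    #include <stdint.h>

    #define PAGE_SIZE 4096u

    typedef struct {
        uint32_t *page_table;    /* frame number for each page of the segment */
        uint32_t  num_pages;     /* number of entries in that page table      */
    } SegDescriptor;

    /* Returns the physical address for <s, p, d>, or (uint32_t)-1 on a
       violation that would trap to the OS. */
    uint32_t segpage_translate(const SegDescriptor seg_table[], uint32_t num_segs,
                               uint32_t s, uint32_t p, uint32_t d)
    {
        if (s >= num_segs || p >= seg_table[s].num_pages || d >= PAGE_SIZE)
            return (uint32_t)-1;
        return seg_table[s].page_table[p] * PAGE_SIZE + d;
    }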
OS Unit 4 63
Segmentation with Paging Hardware
OS Unit 4 64
Segmentation with Paging Hardware
OS Unit 4 65
Segmentation with Paging Hardware
OS Unit 4 66
Segmentation with Paging (Cont.)
Advantages-
• Segment table contains only one entry corresponding to each segment.
• It reduces memory usage.
• The size of page table is limited by the segment size.
• It solves the problem of external fragmentation.
Disadvantages-
• Segmented paging suffers from internal fragmentation.
• The complexity level is much higher as compared to paging.
OS Unit 4 67
Virtual Memory (CO4)
• Virtual memory is a storage mechanism that offers the user the illusion of having a very big main
memory. It does this by treating a part of secondary memory as if it were main memory. With
virtual memory, the user can run processes that are bigger than the available main memory.
• Therefore, instead of loading one long process entirely into main memory, the OS loads parts of
more than one process into main memory. Virtual memory is mostly implemented with demand
paging and demand segmentation.
Reasons for using virtual memory:
• Whenever the computer does not have enough space in physical memory, it writes what it needs
to remember to the hard disk, in a swap file, as virtual memory.
• If a computer running Windows needs more memory/RAM than is installed in the system, it uses a
small portion of the hard drive for this purpose.
OS Unit 4 68
Working of Virtual Memory (Cont.)
• Virtual memory has become quite common in the modern world. It is used whenever some pages
need to be loaded into main memory for execution and enough memory is not available for that
many pages.
• In that case, instead of preventing pages from entering main memory, the OS moves the pages of
RAM that have been used least recently, or that are no longer referenced, out to secondary
memory in order to make space for the new pages in main memory.
OS Unit 4 69
(Cont.)
Virtual address space – logical view of how process is stored in memory
– Usually start at address 0, contiguous addresses until end of space
– Meanwhile, physical memory organized in page frames
– MMU must map logical to physical
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and
Greg Gagne Page No. -361
OS Unit 4 70
Virtual-address Space
OS Unit 4 71
Virtual-address Space
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin,
and Greg Gagne Page No. -361-362
OS Unit 4 72
Demand Paging (CO4)
• A demand-paging mechanism is very similar to a paging system with swapping, where processes
are stored in secondary memory and pages are loaded only on demand, not in advance.
• So, when a context switch occurs, the OS does not copy any of the old program's pages out to disk
or any of the new program's pages into main memory. Instead, it starts executing the new program
after loading its first page and then fetches the program's pages as they are referenced.
• During program execution, if the program references a page that is not available in main memory
because it was swapped out, the processor treats it as an invalid memory reference. This causes a
page fault, which transfers control from the program back to the OS; the OS then brings the
required page back into memory.
OS Unit 4 73
Demand Paging
OS Unit 4 74
Valid-Invalid Bit
With each page table entry a valid–invalid bit is associated
(v in-memory – memory resident, i not-in-memory)
Initially valid–invalid bit is set to i on all entries
During MMU address translation, if the valid–invalid bit in the page table entry is i ⇒ page fault
OS Unit 4 75
Demand Paging (Cont.) (CO4)
Figure:- Page table when some pages are not in main memory.
OS Unit 4 76
Page Fault
If there is a reference to a page, the first reference to that page will trap to the operating system:
a page fault
1. The operating system looks at another table to decide:
– Invalid reference ⇒ abort
– Just not in memory ⇒ continue with the steps below
2. Find a free frame
3. Swap the page into the frame via a scheduled disk operation
4. Reset tables to indicate the page is now in memory; set validation bit = v
5. Restart the instruction that caused the page fault
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne
Page No. -362-364.
OS Unit 4 79
Page Replacement
Prevent over-allocation of memory by modifying page-fault service routine to include page
replacement
Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to
disk
Page replacement completes separation between logical memory and physical memory – large
virtual memory can be provided on a smaller physical memory
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne
Page No. -369-377
OS Unit 4 80
Need For Page Replacement.
OS Unit 4 82
Page Replacement
OS Unit 4 84
First-In-First-Out (FIFO) Algorithm
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1
3 frames (3 pages can be in memory at a time per process)
15 page faults
Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
Adding more frames can cause more page faults!
Belady’s Anomaly: It is the phenomenon in which increasing the number of page frames
results in an increase in the number of page faults for certain memory access patterns.
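A small runnable FIFO simulation for the reference string above (illustrative code, not from the reference); with 3 frames it reports the 15 page faults quoted:

    #include <stdio.h>

    int fifo_faults(const int refs[], int n, int frames)
    {
        int mem[16], count = 0, oldest = 0, faults = 0;  /* frames <= 16 assumed */
        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < count; j++)
                if (mem[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;
            faults++;
            if (count < frames) {
                mem[count++] = refs[i];                  /* free frame available  */
            } else {
                mem[oldest] = refs[i];                   /* evict the oldest page */
                oldest = (oldest + 1) % frames;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
        int n = (int)(sizeof refs / sizeof refs[0]);
        printf("FIFO faults (3 frames): %d\n", fifo_faults(refs, n, 3));
        return 0;
    }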
OS Unit 4 85
Optimal Algorithm
Replace page that will not be used for longest period of time
– 9 is optimal for the example
How do you know this?
– Can’t read the future
Used for measuring how well your algorithm performs
9 page faults
OS Unit 4 86
Least Recently Used (LRU) Algorithm
12 page faults
Counting algorithms keep a counter of the number of references that have been made to each page
(not commonly used in practice):
Least Frequently Used (LFU) Algorithm: replaces the page with the smallest count
Most Frequently Used (MFU) Algorithm: based on the argument that the page with the smallest
count was probably just brought in and has yet to be used
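For comparison with the FIFO sketch above, a minimal LRU simulation (illustrative, not from the reference); on the same reference string with 3 frames it reports the 12 page faults quoted for LRU:

    /* Victim = the resident page whose last use lies furthest in the past. */
    int lru_faults(const int refs[], int n, int frames)
    {
        int mem[16], last_use[16], count = 0, faults = 0;  /* frames <= 16 assumed */
        for (int i = 0; i < n; i++) {
            int found = -1;
            for (int j = 0; j < count; j++)
                if (mem[j] == refs[i]) { found = j; break; }
            if (found >= 0) { last_use[found] = i; continue; }   /* hit */
            faults++;
            int slot;
            if (count < frames) {
                slot = count++;                   /* free frame available     */
            } else {
                slot = 0;                         /* pick least recently used */
                for (int j = 1; j < frames; j++)
                    if (last_use[j] < last_use[slot]) slot = j;
            }
            mem[slot] = refs[i];
            last_use[slot] = i;
        }
        return faults;
    }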
• Local replacement – each process selects from only its own set of allocated
frames
– More consistent per-process performance
– But possibly underutilized memory
OS Unit 4 89
Thrashing
• If page faults and swapping happen very frequently, the operating system has to spend most of its
time swapping pages. This state of the operating system is termed thrashing. Because of thrashing,
CPU utilization is reduced.
• To understand this with an example: if a process does not have the number of frames it needs to
support its pages in active use, it will quickly page-fault. At this point it must replace some page;
but since all its pages are in active use, it must replace a page that will be needed again right away.
Consequently, it quickly faults again, and again, and again, replacing pages that it must bring back
in immediately. This high paging activity by a process is called thrashing.
• During thrashing, the CPU spends less time on actual productive work and more time on swapping.
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne
Page No. -386-391.
OS Unit 4 90
Thrashing (Cont.)
OS Unit 4 91
Cause of thrashing
• Thrashing affects the performance of execution in the operating system and results in severe
performance problems.
• When CPU utilization is low, the process-scheduling mechanism tries to load many processes into
memory at the same time, so that the degree of multiprogramming increases. In this situation
there are more processes in memory than there are available frames, and only a limited number of
frames can be allocated to each process.
• Whenever a high-priority process arrives in memory and no frame is free at that time, a page of
some other process that currently occupies a frame is moved to secondary storage, and the freed
frame is allocated to the higher-priority process.
• In other words, as soon as memory fills up, processes start spending a lot of time waiting for their
required pages to be swapped in. CPU utilization then becomes low again, because most of the
processes are waiting for pages.
• Thus a high degree of multiprogramming and a lack of frames are the two main causes of thrashing
in the operating system.
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne Page No. -386-
391.
OS Unit 4 92
Locality Model
• A locality is a set of pages that are actively used together. The locality model states that
as a process executes, it moves from one locality to another. A program is generally
composed of several different localities which may overlap.
• For example, when a function is called, it defines a new locality, in which memory references are
made to the instructions of the function, its local and global variables, and so on. Similarly, when
the function is exited, the process leaves this locality.
Reference Book :- Operating System Concepts (8th Edition) Abraham Silberschatz, Peter Baer Galvin, and Greg
Gagne Page No. -386-391.
OS Unit 4 93
Working-Set Model
• This model is based on the concept of locality stated above.
The basic principle states that if we allocate enough frames to a process to accommodate its
current locality, it will fault only when it moves to a new locality. But if the allocated frames are
fewer than the size of the current locality, the process is bound to thrash.
• According to this model, for a parameter Δ (the working-set window), the working set is defined as
the set of pages referenced in the most recent Δ page references. Hence, all actively used pages
always end up being part of the working set.
• The accuracy of the working set depends on the value of Δ. If Δ is too large, working sets may
overlap; on the other hand, for smaller values of Δ, the locality might not be covered entirely.
• If D is the total demand for frames and WSSi is the working-set size of process i, then
D = Σ WSSi
• Now, if m is the number of frames available in memory, there are two possibilities:
– if D > m, i.e. the total demand exceeds the number of frames, thrashing will occur, because some
processes will not get enough frames;
– if D <= m, there will be no thrashing.
OS Unit 4 94
Keeping Track of the Working Set
Approximate with an interval timer + a reference bit
Example: Δ = 10,000 references
– Timer interrupts after every 5,000 time units
– Keep 2 bits in memory for each page
– Whenever the timer interrupts, copy the reference bits and then reset all reference bits to 0
– If one of the bits in memory = 1 ⇒ the page is in the working set
Why is this not completely accurate?
– Improvement: 10 bits and an interrupt every 1,000 time units
OS Unit 4 95
Locality of Reference
Locality of reference refers to the tendency of the computer program to access the same
set of memory locations for a particular time period. The property of Locality of
Reference is mainly shown by loops and subroutine calls in a program.
• On an abstract level, there are two types of locality, as follows:
• Temporal locality
• Spatial locality
Temporal locality
• This type of optimization keeps frequently accessed memory references in a nearby, faster memory
location for a short duration of time, so that future accesses to them are much faster.
OS Unit 4 96
Locality of Reference
• Spatial locality
• This type of optimization assumes that if a memory location has been accessed, it is highly likely
that a nearby or consecutive memory location will be accessed as well; hence nearby memory
references are also brought into a nearby (faster) memory location for quicker access.
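As an illustration (not taken from the reference), both loops below sum the same matrix; the first enjoys good spatial locality because it touches consecutive addresses, while the second strides across rows and does not:

    #include <stddef.h>

    #define N 1024

    long sum_row_major(const int a[N][N])       /* good spatial locality */
    {
        long s = 0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i][j];                   /* consecutive addresses */
        return s;
    }

    long sum_column_major(const int a[N][N])    /* poor spatial locality */
    {
        long s = 0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i][j];                   /* jumps a full row of the array each step */
        return s;
    }

The loop indices and the running sum themselves illustrate temporal locality, since they are reused on every iteration.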
OS Unit 4 97
Old Question Papers
OS Unit 4 98
Old Question Papers
OS Unit 4 99
Old Question Papers
OS Unit 4 100
Old Question Papers
OS Unit 4 101
Expected Questions for University Exam
1. Explain paging. Describe how logical address is translated to physical address in a paged system.
2. Explain thrashing. State the cause of thrashing and discuss its solution
3. Differentiate between internal fragmentation and external fragmentation.
4. Define Belady’s anomaly.
5. Differentiate between the paging and segmentation.
6. When do page faults occur? Describe in detail the actions taken by the operating system when a
page fault occurs.
7. Consider the following reference string: 1,3,2,4,0,1,5,6,0,1,2,3,0,5,6,4,2,1,3,2,7,3,2.
How many page faults will occur for:
• FIFO Page Replacement
• Optimal Page Replacement
• LRU Page Replacement
Assuming three and four frames (initially empty)
OS Unit 4 102
OS Unit 4 103