Cs3451 Unit III Os
Dynamic Loading
Dynamic Linking
Linking is postponed until execution time; this is particularly useful for libraries.
A small piece of code called a stub is used to locate the appropriate memory-resident library routine.
The stub replaces itself with the address of the routine and executes the routine.
The operating system must check whether the routine is in the process's memory address space.
Shared libraries: programs linked before a new library was installed will
continue using the older library.
Overlays:
Enable a process to be larger than the amount of memory allocated to it.
At any given time, only the currently needed instructions and data are kept in
memory.
Swapping
Contiguous Allocation
Each process is contained in a single contiguous section of memory.
There are two methods:
Fixed-Partition Method
Variable-Partition Method
Variable-partition method:
o Divide memory into variable size partitions, depending upon the size
of the incoming process.
o When a process terminates, the partition becomes available for
another process.
o As processes complete and leave they create holes in the main
memory.
o Hole – block of available memory; holes of various size are scattered
throughout memory.
Fragmentation:
o External Fragmentation – This occurs when enough total memory
space exists to satisfy a request, but it is not contiguous; i.e., storage is
fragmented into a large number of small holes scattered throughout main
memory.
o Internal Fragmentation – Allocated memory may be slightly larger than the
requested memory; the unused difference is internal to the allocated partition.
Example: hole = 184 bytes
o Solutions:
1. Coalescing : Merge the adjacent holes together.
2. Compaction: Move all processes toward one end of memory and all holes
toward the other end, producing one large hole of available memory.
This scheme is expensive and is possible only if relocation is dynamic
and done at execution time.
3. Permit the logical address space of a process to be non-contiguous.
This is achieved through two memory management schemes namely
paging and segmentation.
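The first-fit allocation and hole-coalescing ideas above can be sketched in Python. This is a toy model; the `Allocator` class and its hole list are illustrative, not from the notes:

```python
# Toy variable-partition allocator: first-fit placement, plus
# coalescing of adjacent holes when a block is freed.
class Allocator:
    def __init__(self, size):
        # Free list of (start, length) holes; initially one big hole.
        self.holes = [(0, size)]

    def allocate(self, need):
        # First fit: take the first hole large enough.
        for i, (start, length) in enumerate(self.holes):
            if length >= need:
                if length == need:
                    self.holes.pop(i)
                else:
                    self.holes[i] = (start + need, length - need)
                return start
        return None  # external fragmentation: no single hole fits

    def free(self, start, length):
        # Return the block, then coalesce adjacent holes.
        self.holes.append((start, length))
        self.holes.sort()
        merged = [self.holes[0]]
        for s, l in self.holes[1:]:
            ps, pl = merged[-1]
            if ps + pl == s:            # adjacent: merge into one hole
                merged[-1] = (ps, pl + l)
            else:
                merged.append((s, l))
        self.holes = merged
```

Freeing two neighbouring blocks leaves one merged hole rather than two small ones, which is exactly what coalescing buys: a later, larger request can still be satisfied.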
Paging
Paging Hardware
Allocation
o When a process arrives into the system, its size (expressed in pages) is
examined.
o Each page of the process needs one frame. Thus if the process requires n pages,
at least n frames must be available in memory.
o If n frames are available, they are allocated to this arriving process.
o The 1st page of the process is loaded into one of the allocated frames & the
frame number is put into the page table.
o Repeat the above step for the next pages & so on.
(a) Before Allocation (b) After Allocation
Frame table: It is used to determine which frames are allocated, which frames are
available, how many total frames there are, and so on; i.e., it contains all the
information about the frames in physical memory.
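The allocation steps above can be sketched as follows (the function and variable names are assumptions for illustration):

```python
# Toy sketch of loading a process under paging: each of the n pages
# gets one free frame, and the frame number goes into the page table.
def load_process(n_pages, free_frames):
    if len(free_frames) < n_pages:
        raise MemoryError("fewer than n frames available")
    page_table = []
    for page in range(n_pages):
        frame = free_frames.pop(0)   # take any free frame
        page_table.append(frame)     # record page -> frame mapping
    return page_table
```

Note that the frames need not be contiguous: page 0 may land in frame 7 and page 1 in frame 2, which is the whole point of paging.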
Effective Access Time (EAT) with a TLB: if a TLB lookup takes 20 ns, a memory
access takes 100 ns, and the TLB hit ratio is 80%, then
EAT = 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100)
= 96 + 44
= 140 ns.
(iii) Memory Protection
a) Hierarchical Paging
b) Hashed Page Tables
c) Inverted Page Tables
a) Hierarchical Paging
o Break the page table up into smaller pieces, because if the page table is
too large it is quite difficult to search.
Example: “Two-Level Paging “
Address-Translation Scheme
Address-translation scheme for a two-level 32-bit paging architecture
b) Hashed Page Tables
o Each entry in the hash table contains a linked list of elements that hash to the
same location.
o Each entry consists of;
(a) Virtual page numbers
(b) Value of mapped page frame.
(c) Pointer to the next element in the linked list.
o Working Procedure:
The virtual page number in the virtual address is hashed into the hash
table.
Virtual page number is compared to field (a) in the 1st element in the
linked list.
If there is a match, the corresponding page frame (field (b)) is used to
form the desired physical address.
If there is no match, subsequent entries in the linked list are searched
for a matching virtual page number.
Clustered page table: It is a variation of hashed page table & is similar to hashed
page table except that each entry in the hash table refers to several pages rather
than a single page.
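The hashed-lookup procedure described above can be sketched as follows (the bucket count and names are illustrative):

```python
# Toy hashed page table: each bucket holds a chain of
# (virtual page number, frame number) entries.
NUM_BUCKETS = 16

def insert(table, vpn, frame):
    bucket = hash(vpn) % NUM_BUCKETS
    table.setdefault(bucket, []).append((vpn, frame))

def lookup(table, vpn):
    bucket = hash(vpn) % NUM_BUCKETS
    for entry_vpn, frame in table.get(bucket, []):   # walk the chain
        if entry_vpn == vpn:                         # compare field (a)
            return frame                             # use field (b)
    return None                                      # no match: page fault
```

Two virtual page numbers that collide into the same bucket simply extend the chain; the comparison against field (a) distinguishes them, exactly as in the working procedure above.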
c) Inverted Page Tables
o It has one entry for each real page (frame) of memory & each entry
consists of the virtual address of the page stored in that real memory
location, with information about the process that owns that page. So, only
one page table is in the system.
o When a memory reference occurs, part of the virtual address, consisting of
<Process-id, Page-no>, is presented to the memory sub-system.
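A minimal sketch of an inverted-table lookup, assuming a linear search over the frame entries (real implementations typically hash instead):

```python
# Toy inverted page table: one entry per physical frame, holding
# (process-id, page-number). A reference is resolved by finding the
# frame whose entry matches the presented pair.
def translate(inverted_table, pid, page_no, page_size=4096, offset=0):
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page_no):
            return frame * page_size + offset
    return None  # no matching entry: page fault
```

The search cost is the known drawback of this scheme: the table is small (one entry per frame), but every reference may scan it, which is why hashed variants are used in practice.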
o In the worst case a process would need n pages plus one byte. It would be
allocated n + 1 frames, resulting in an internal fragmentation of almost an
entire frame.
Example:
o Relocation: dynamic, by segment table.
o Sharing: shared segments, same segment number.
o Allocation: first fit / best fit; external fragmentation.
o Protection: with each entry in the segment table associate:
validation bit = 0 means illegal segment
read/write/execute privileges
o Protection bits associated with segments; code sharing occurs at segment
level
o Since segments vary in length, memory allocation is a dynamic
storage-allocation problem
o A segmentation example is shown in the following diagram
Address Translation scheme
EXAMPLE:
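Segment-table translation with a base/limit check can be sketched as follows (the base and limit values used below are illustrative):

```python
# Toy segment-table translation: each entry holds (base, limit).
# The offset is checked against the limit before adding the base.
def seg_translate(segment_table, seg, offset):
    base, limit = segment_table[seg]
    if offset >= limit:
        # addressing beyond the segment traps to the operating system
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset
```

For example, with segment 1 at base 6300 and limit 400, offset 53 maps to physical address 6353, while offset 400 traps.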
Sharing of Segments
o Segments are shared when entries in the segment tables of two different
processes point to the same physical location.
o The IBM OS/2 32-bit version is an operating system running on top of the
Intel 386 architecture. The 386 uses segmentation with paging for memory
management. The maximum number of segments per process is 16 K, and
each segment can be as large as 4 gigabytes.
o Each entry in the LDT and GDT consist of 8 bytes, with detailed information
about a particular segment including the base location and length of the
segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit
number:
s (13 bits) | g (1 bit) | p (2 bits)
Here s designates the segment number, g indicates whether the segment is in
the GDT or LDT, and p deals with protection.
o The base and limit information about the segment in question are used to
generate a linear-address.
o First, the limit is used to check for address validity. If the address is not valid, a
memory fault is generated, resulting in a trap to the operating system. If it is
valid, then the value of the offset is added to the value of the base, resulting in
a 32-bit linear address. This address is then translated into a physical address.
o The linear address is divided into a page number consisting of 20 bits, and a
page offset consisting of 12 bits. Since we page the page table, the page
number is further divided into a 10-bit page directory pointer and a 10-bit
page table pointer. The logical address is as follows.
p1 (10 bits) | p2 (10 bits) | d (12 bits)
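The bit-field split described above can be checked with a few lines of Python (the helper name is illustrative):

```python
# Split a 32-bit linear address into the fields used by two-level
# paging on the 386: 10-bit page-directory index (p1), 10-bit
# page-table index (p2), and 12-bit offset (d).
def split_linear(addr):
    p1 = (addr >> 22) & 0x3FF   # top 10 bits
    p2 = (addr >> 12) & 0x3FF   # next 10 bits
    d  = addr & 0xFFF           # low 12 bits
    return p1, p2, d

# split_linear(0x00403005) -> (1, 3, 5)
```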
o To improve the efficiency of physical-memory use, Intel 386 page tables can
be swapped to disk. In this case, an invalid bit is used in the page-directory
entry to indicate whether the table to which the entry points is in memory
or on disk.
o If the table is on disk, the operating system can use the other 31 bits to specify
the disk location of the table; the table then can be brought into memory on
demand.
Virtual Memory
Demand Paging
o Instead of swapping in the whole process, the pager brings only the
necessary pages into memory. Thus,
1. It avoids reading in pages that will not be used anyway.
2. It reduces the swap time.
3. It reduces the amount of physical memory needed.
o To differentiate between those pages that are in memory & those that are on
the disk we use the Valid-Invalid bit
Valid-Invalid bit
valid (v): the page is legal and is in memory
invalid (i): the page is not in the process's logical address space, or is a
valid page that is currently on the disk
Page Fault
- Memory-Mapped Files
a) Copy-on-Write
o fork() creates a child process as a duplicate of the parent process;
traditionally it worked by creating a copy of the parent's address space for
the child, duplicating the pages belonging to the parent.
o Copy-on-Write (COW) allows both parent and child processes to initially
share the same pages in memory. These shared pages are marked as Copy-on-
Write pages, meaning that if either process modifies a shared page, a copy of
the shared page is created.
o vfork():
With this the parent process is suspended & the child process uses the
address space of the parent.
Because vfork() does not use Copy-on-Write, if the child process changes
any pages of the parent's address space, the altered pages will be visible to
the parent once it resumes.
Therefore, vfork() must be used with caution, ensuring that the child process
does not modify the address space of the parent.
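The copy-on-write rule can be illustrated with a toy model (all structures below are illustrative simplifications of real kernel bookkeeping, not from the notes):

```python
# Toy copy-on-write: parent and child share frames until one writes.
# page_tables maps owner -> {page: frame}; refcount maps frame -> sharers;
# frames is the list of page contents.
def cow_write(page_tables, owner, page, frames, refcount):
    frame = page_tables[owner][page]
    if refcount[frame] > 1:                 # page is shared: copy it first
        refcount[frame] -= 1
        new_frame = len(frames)
        frames.append(frames[frame][:])     # duplicate the page contents
        refcount[new_frame] = 1
        page_tables[owner][page] = new_frame
    return page_tables[owner][page]         # writer's (possibly new) frame
```

Only the first write by a sharer pays for a copy; subsequent writes to the now-private page proceed directly, which is why fork() followed by a quick exec() touches very few pages.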
(b) Memory – mapped files:
1. Find the location of the desired page on the disk.
2. Find a free frame. If there is no free frame, use a page-replacement
algorithm to select a victim frame, write the victim page to the disk, and
change the page & frame tables accordingly.
3. Read the desired page into the (new) free frame. Update the page and frame
tables.
4. Restart the process
Note:
If no frames are free, two page transfers are required & this situation effectively
doubles the page-fault service time.
FIFO Algorithm
Example:
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Optimal (OPT) Algorithm
o Replace the page that will not be used for the longest period of time.
Example:
Drawback:
LRU (Least Recently Used) Algorithm
o Replace the page that has not been used for the longest period of time.
Example:
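The three algorithms can be compared by simulating them on the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5. The code is a sketch written for this comparison, not from the notes:

```python
# Count page faults for FIFO, OPT and LRU on a reference string.
def fifo(ref, nframes):
    frames, queue, faults = set(), [], 0
    for p in ref:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                victim = queue.pop(0)       # evict oldest arrival
                frames.remove(victim)
            frames.add(p)
            queue.append(p)
    return faults

def opt(ref, nframes):
    frames, faults = set(), 0
    for i, p in enumerate(ref):
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                future = ref[i + 1:]
                # evict the page not needed for the longest time
                victim = max(frames, key=lambda q:
                             future.index(q) if q in future else len(future) + 1)
                frames.remove(victim)
            frames.add(p)
    return faults

def lru(ref, nframes):
    order, faults = [], 0                   # least recently used first
    for p in ref:
        if p in order:
            order.remove(p)                 # hit: refresh recency
        else:
            faults += 1
            if len(order) == nframes:
                order.pop(0)                # evict least recently used
        order.append(p)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# fifo(ref, 3) -> 9    fifo(ref, 4) -> 10  (Belady's anomaly)
# opt(ref, 3)  -> 7    lru(ref, 3)  -> 10
```

Note that FIFO faults more with 4 frames than with 3 on this string (Belady's anomaly), while OPT gives the lower bound of 7 faults.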
LRU Approximation Algorithms
(i) Reference Bit Algorithm
o Reference bit
With each page associate a reference bit, initially set to 0
When page is referenced, the bit is set to 1
o When a page needs to be replaced, replace the page whose reference bit is 0
o The order of use is not known, but we know which pages were used and
which were not.
o If the 8-bit reference byte is 00000000, the page has not been used for 8 time
periods.
o If it is 11111111, the page has been used at least once in each time
period.
o If the reference byte of page 1 is 11000100 and that of page 2 is 01110111,
then page 2 is the LRU page (it has the smaller value).
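The 8-bit shift-register scheme can be sketched as follows (the function names are illustrative):

```python
# Additional-reference-bits: each page keeps an 8-bit history byte.
# At each timer interrupt, shift right and insert the current
# reference bit at the high-order end.
def tick(byte, referenced):
    return ((1 if referenced else 0) << 7) | (byte >> 1)

def lru_by_history(history):
    # history maps page -> 8-bit history as a bit string;
    # the page with the smallest unsigned value is the LRU page.
    return min(history, key=lambda p: int(history[p], 2))

history = {"page1": "11000100", "page2": "01110111"}
# lru_by_history(history) -> "page2"  (119 < 196)
```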
(ii) Second Chance Algorithm
o Basic algorithm is FIFO
o When a page has been selected, check its reference bit.
If 0 proceed to replace the page
If 1 give the page a second chance and move on to the next
FIFO page.
When a page gets a second chance, its reference bit is
cleared and arrival time is reset to current time.
Hence a second chance page will not be replaced until all
other pages are replaced.
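Victim selection under second chance can be sketched as follows (the queue layout, a list of [page, reference-bit] pairs, is an assumption for illustration):

```python
# Second chance: scan the FIFO queue; a page with reference bit 1 gets
# its bit cleared and is moved to the back (resetting its arrival
# time); the first page found with bit 0 is the victim.
def second_chance_victim(queue):
    while True:
        page, ref = queue[0]
        if ref == 0:
            queue.pop(0)            # reference bit 0: replace this page
            return page
        queue.pop(0)                # reference bit 1: second chance
        queue.append([page, 0])     # clear bit, move to back of queue
```

If every page has its bit set, the scan clears them all and comes back to the front, so the algorithm degenerates to plain FIFO, as the notes imply.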
Counting-Based Algorithms
o Keep a counter of the number of references that have been made to each
page.
1. Least Frequently Used (LFU )Algorithm: replaces page with
smallest count
2. Most Frequently Used (MFU )Algorithm: replaces page with
largest count
It is based on the argument that the page with the
smallest count was probably just brought in and has yet
to be used
Page Buffering Algorithm
o These are used along with page replacement algorithms to improve their
performance
Technique 1: Proportional allocation
Let si be the size of process pi and m the total number of frames. Then
S = ∑ si
ai = (si / S) × m
where ai is the number of frames allocated to process pi.
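The formula can be evaluated directly; the sample sizes below (s1 = 10, s2 = 127, m = 62) are assumed for illustration:

```python
# Proportional allocation: a_i = (s_i / S) * m, floored to whole frames.
def proportional(sizes, m):
    S = sum(sizes)
    return [s * m // S for s in sizes]

# proportional([10, 127], 62) -> [4, 57]
```

Flooring can leave a few frames unassigned (here 62 − 61 = 1); a real allocator would distribute the remainder, e.g. to the largest process.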
o Global replacement – each process selects a replacement frame from the set
of all frames; one process can take a frame from another.
o Local replacement – each process selects from only its own set of allocated
frames.
Thrashing
1. Working-Set Strategy
Other Issues
o Prepaging
To reduce the large number of page faults that occur at process
startup
Prepage all or some of the pages a process will need, before they are
referenced
But if prepaged pages are unused, I/O and memory are wasted
o Page Size
Page size selection must take into consideration:
o fragmentation
o table size
o I/O overhead
o locality
o TLB Reach
TLB Reach - The amount of memory accessible from the TLB
TLB Reach = (TLB Size) X (Page Size)
Ideally, the working set of each process is stored in the TLB.
Otherwise there is a high degree of page faults.
Increase the Page Size. This may lead to an increase in fragmentation
as not all applications require a large page size
Provide Multiple Page Sizes. This allows applications that require
larger page sizes the opportunity to use them without an increase in
fragmentation.
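For example, a 64-entry TLB with 4 KB pages:

```python
# TLB reach = number of TLB entries x page size.
def tlb_reach(entries, page_size):
    return entries * page_size

# tlb_reach(64, 4 * 1024) -> 262144 bytes, i.e. 256 KB
```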
o I/O interlock
Pages must sometimes be locked into memory.
Consider I/O: pages being used to copy a file from a device must be
locked, so that they cannot be selected for eviction by a
page-replacement algorithm.