Unit 3 and 4 notes OS

The document discusses deadlocks in operating systems, defining them as situations where processes are blocked due to resource holding and waiting conditions. It outlines prerequisites for deadlocks, strategies for handling them, and methods for information management, including file systems and access methods. Additionally, it describes directory structures and operations related to file management.

Unit III-Deadlocks and Information Management

3.1 Deadlocks:
3.1.1 Concept:
Deadlock is a situation where a set of processes are blocked because each process is
holding a resource and waiting for another resource acquired by some other process.

Consider the example of two trains approaching each other on a single track: once they are face to face, neither train can move.

A similar situation occurs in operating systems when there are two or more processes that
hold some resources and wait for resources held by other(s).

EXAMPLE OF DEADLOCK

A real-world example is traffic crossing a one-way bridge. Here, the bridge is the resource. When a deadlock happens, it can be resolved if one car backs up (preempting the resource and rolling back). Several cars may have to back up if a deadlock occurs, so starvation is possible.

3.1.2 GRAPHICAL REPRESENTATION OF DEADLOCK


To represent the relationship between processes and resources, a certain graphical notation is
used:

• As shown in the figure, resources R1 and R2 are shown in rectangles, whereas processes P1 and P2 are shown in hexagons.
• The arrows show the relationships.
• The first figure means that resource R1 is assigned to process P1.
• The second figure means that process P2 wants resource R2.
• These graphs are called Directed Resource Allocation Graphs (DRAG). They help us in
understanding the process of detection of a deadlock.
Scenario 1: P1 holds R1 but demands R2
P2 holds R2 but demands R1

If we draw a DRAG for this situation, it will look as shown below:

[Figure: DRAG with edges R1→P1 (R1 assigned to P1), P1→R2 (P1 requests R2), R2→P2 (R2 assigned to P2), and P2→R1 (P2 requests R1), forming a cycle.]

Thus it forms a closed loop causing a circular wait condition.

For example, in the below diagram, Process 1 is holding Resource 1 and waiting for resource 2
which is acquired by process 2, and process 2 is waiting for resource 1.

3.1.3 DEADLOCK PRE-REQUISITES


1. Mutual Exclusion: One or more resources are non-shareable (only one process can use a resource at a time).

There should be a resource that can only be held by one process at a time. In the diagram below,
there is a single instance of Resource 1 and it is held by Process 1 only.

2. Hold and Wait: A process is holding at least one resource and waiting for more resources.
A process can hold multiple resources and still request more resources held by other processes. In the diagram given below, Process 2 holds Resource 2 and Resource 3 and is requesting Resource 1.

3. No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
A resource cannot be preempted from a process by force. A process can only release a resource
voluntarily. In the diagram below, Process 2 cannot preempt Resource 1 from Process 1. It will
only be released when Process 1 relinquishes it voluntarily after its execution is complete.

4. Circular Wait: A set of processes are waiting for each other in circular form.

A process is waiting for the resource held by the second process, which is waiting for the resource
held by the third process and so on, till the last process is waiting for a resource held by the first
process. This forms a circular chain.

For example: Process 1 is allocated Resource2 and it is requesting Resource 1. Similarly, Process
2 is allocated Resource 1 and it is requesting Resource 2. This forms a circular wait loop.

3.1.4 CONCEPTS OF DEADLOCK STRATEGIES
DEADLOCK IGNORANCE
• Ignore the problem altogether: this is the most widely used approach of all the mechanisms.
• It is used by many operating systems.
• In this approach, the Operating System assumes that deadlock never occurs.
• It simply ignores deadlock. This approach is best suited for a single-user system where the user uses the system only for browsing and other routine work.
• Operating systems like Windows and Linux mainly focus on performance. The performance of the system decreases if it runs a deadlock-handling mechanism all the time; if deadlock happens 1 time out of 100, it is completely unnecessary to use the deadlock-handling mechanism all the time.
• In these systems, the user simply has to restart the computer in the case of a deadlock. Windows and Linux mainly use this approach.
DEADLOCK DETECTION
• The DRAG can help in detecting a deadlock, but these graphs can become complex in realistic situations.
• A deadlock can be detected by a resource scheduler, as it keeps track of all the resources that are allocated to different processes.
• The OS can use the following mechanism to detect a deadlock:
1. Number all the processes and resources separately.
2. Maintain tables showing the allocation of processes to resources.
3. When a process asks for a resource, the OS checks these tables first to detect a deadlock.
• After a deadlock is detected, it can be resolved using the following methods.

1. All the processes that are involved in the deadlock are terminated. This is not a good
approach as all the progress made by the processes is destroyed.

2. Resources can be preempted from some processes and given to others till the
deadlock is resolved.
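The table-based bookkeeping above amounts to maintaining a wait-for graph; a deadlock then corresponds to a cycle in that graph, as in the DRAG scenario earlier. A minimal detection sketch (process names are illustrative):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: [processes it waits on]}."""
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in in_stack:          # back edge -> circular wait
                return True
            if nxt not in visited and dfs(nxt):
                return True
        in_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# Scenario 1 from above: P1 waits on P2 (for R2), P2 waits on P1 (for R1)
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))  # True  -> deadlock
print(has_cycle({"P1": ["P2"], "P2": []}))      # False -> no deadlock
```

Real detectors work over the allocation tables directly, but reduce to the same cycle check when each resource has a single instance.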

DEADLOCK RECOVERY
• This approach lets processes fall into deadlock and then periodically checks whether a deadlock has occurred in the system. If it has, it applies recovery methods to rid the system of the deadlock.
1. Suspend/Resume a process:
– A process is selected based on a variety of criteria (e.g., low priority) and is suspended for a long time. The resources are reclaimed from that process and then allocated to other processes that are waiting for them. When one of the waiting processes finishes, the original suspended process is resumed.
2. Kill a process:
– The Operating System decides to kill a process and reclaim all its resources after ensuring that this will resolve the deadlock. This solution is simple but involves the loss of at least one process.

DEADLOCK PREVENTION
• The idea is not to let the system enter a deadlock state.
• Prevention is done by negating one of the necessary conditions for deadlock.
• Deadlock happens only when Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait hold simultaneously (see the pre-requisites above for more explanation).
• If it is possible to violate one of the four conditions at any time, then deadlock can never occur in the system.
• The idea behind the approach is simply that we have to defeat one of the four conditions.

DEADLOCK AVOIDANCE
• Avoidance is forward-looking in nature.
• The "avoidance" strategy makes an assumption: all information about the resources a process will require must be known prior to the execution of the process.
• The Banker's algorithm is used to avoid deadlock.
• In deadlock avoidance, the operating system checks whether the system is in a safe state or an unsafe state at every allocation step it performs.
• The process continues as long as the system remains in a safe state. Once the system would move to an unsafe state, the OS has to backtrack one step.
• In simple words, the OS reviews each allocation so that the allocation doesn't cause a deadlock in the system.
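The safe-state check mentioned above is the core of the Banker's algorithm: a state is safe if some ordering lets every process obtain its remaining need and finish. A sketch, using the classic textbook state (3 resource types, 5 processes; the matrix values are illustrative):

```python
def is_safe(available, allocation, need):
    """Return True if some ordering lets every process finish (safe state).

    available:  free units per resource type
    allocation: allocation[i] = units currently held by process i
    need:       need[i] = units process i may still request
    """
    work = available[:]
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release what it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], alloc, need))  # True: e.g. P1, P3, P4, P2, P0 can finish
```

Before granting a request, the OS tentatively applies it and runs this check; if the resulting state is unsafe, the request is deferred.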

UNIT III- 3.2 INFORMATION MANAGEMENT

Information management is the management of organizational processes and systems that acquire, create, organize, distribute, and use information.
According to a process view of information management, it is a continuous cycle of six closely related activities:
1. Identification of information needs
2. Acquisition and creation of information
3. Analysis and interpretation of information
4. Organization and storage of information
5. Information access and dissemination
6. Information use

3.2.1 SIMPLE FILE SYSTEM


Introduction to File System
A file can be a free-form, indexed, or structured collection of related bytes, having meaning only to the one who created it. In other words, a file is an entry in a directory. A file may have attributes like name, creator, date, type, permissions, etc.

File Structure
A file has various kinds of structure. Some of them can be:
1. Simple record structure, with lines of fixed or variable length.
2. Complex structures, like formatted documents or relocatable load files.
3. No definite structure, like a sequence of words and bytes.

File Attributes
Following are some of the attributes of a file:
Name: It is the only information which is in human-readable form.
Identifier: The file is identified by a unique tag (number) within the file system.
Type: It is needed for systems that support different types of files.
Location: Pointer to the file's location on the device.
Size: The current size of the file.
Protection: This controls and assigns the powers of reading, writing, and executing.
Time, date, and user identification: This is data for protection, security, and usage monitoring.

3.2.2 FILE ACCESS METHODS

The way that files are accessed and read into memory is determined by access methods. Usually a single access method is supported by a system, though there are OSs that support multiple access methods.

1. Sequential Access

Most operating systems access files sequentially; in other words, most files need to be accessed sequentially by the operating system.

In sequential access, the OS reads the file word by word. A pointer is maintained, which initially points to the base address of the file. If the user wants to read the first word of the file, the pointer provides that word to the user and increases its value by one word. This process continues till the end of the file.
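The pointer-based reading just described can be sketched as follows (the word size and file contents are made up for illustration):

```python
class SequentialFile:
    """Toy sequential-access file: a read pointer advances one word at a time."""

    def __init__(self, words):
        self.words = words   # file contents as a list of words
        self.pointer = 0     # initially at the base address of the file

    def read_next(self):
        """Return the word at the pointer and advance by one word."""
        if self.pointer >= len(self.words):
            return None      # end of file reached
        word = self.words[self.pointer]
        self.pointer += 1
        return word

f = SequentialFile(["hello", "sequential", "world"])
print(f.read_next())  # hello
print(f.read_next())  # sequential
```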

Modern systems do provide the concepts of direct access and indexed access, but the most used method is sequential access, because most files, such as text files, audio files, and video files, need to be accessed sequentially.

2. Direct /Random/Relative Access

Direct access is mostly required in the case of database systems. In most cases, we need filtered information from the database, and sequential access can be very slow and inefficient for that.

Suppose every block of storage stores 4 records, and we know that the record we need is stored in the 10th block. In that case, sequential access is a poor fit, because it would traverse all the blocks in order to reach the needed record.

Direct access gives the required result directly, despite the fact that the operating system has to perform some extra work, such as determining the desired block number. It is generally used in database applications.
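The block-lookup arithmetic in the example above is simple: with a fixed number of records per block, the block holding a record can be computed directly instead of scanning every block. A sketch:

```python
RECORDS_PER_BLOCK = 4  # as in the example above

def locate(record_number):
    """Return (block_number, offset_within_block) for a 0-based record number."""
    return divmod(record_number, RECORDS_PER_BLOCK)

# Records 36..39 live in block 9 (0-based), i.e. the 10th block:
print(locate(37))  # (9, 1)
```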

3. Indexed Sequential Access

Indexed sequential access is built on top of sequential access: an index containing pointers to blocks of the file is maintained. To find a record, we first search the index and then use the corresponding pointer to access the file directly.

What is a Directory?
Information about files is maintained by directories. A directory can contain multiple files, and can even have directories inside it. In Windows, directories are also called folders.
Following is the information maintained in a directory:

• Name : The name visible to the user.
• Type : Type of the directory.
• Location : Device, and location on the device where the file header is located.
• Size : Number of bytes/words/blocks in the file.
• Position : Current next-read/next-write pointers.
• Protection : Access control on read/write/execute/delete.
• Usage : Time of creation, access, modification, etc.
• Mounting : When the root of one file system is "grafted" into the existing tree of another file system, it is called mounting.

HIERARCHICAL DIRECTORY SYSTEMS


A directory is a container that holds files and other directories. It organizes files and folders in a hierarchical manner.

a) SINGLE-LEVEL DIRECTORY
• A single-level directory is the simplest directory structure. All files are contained in the same directory, which makes it easy to support and understand.

• A single-level directory has a significant limitation, however, when the number of files increases or when the system has more than one user. Since all the files are in the same directory, they must have unique names. If two users both call their dataset "test", the unique-name rule is violated.
• Advantages:
– Since it is a single directory, so its implementation is very easy.
– If the files are smaller in size, searching will become faster.
– The operations like file creation, searching, deletion, updating are very easy in such
a directory structure.
• Disadvantages:
– There may be name collisions, because two files cannot have the same name.
– Searching becomes time-consuming if the directory is large.

b) TWO-LEVEL DIRECTORY
• A single-level directory often leads to confusion of file names among different users. The solution to this problem is to create a separate directory for each user.
• In the two-level directory structure, each user has their own user file directory (UFD). The UFDs have similar structures, but each lists only the files of a single user. The system's master file directory (MFD) is searched whenever a user logs in. The MFD is indexed by user name or account number, and each entry points to the UFD for that user.
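The MFD/UFD lookup described above can be modeled as a dictionary of per-user dictionaries (the user names and file names below are made up):

```python
# Master file directory: maps user name -> that user's file directory (UFD).
mfd = {
    "alice": {"test": "alice's data", "report": "q3 notes"},
    "bob":   {"test": "bob's data"},   # same file name as alice's, no collision
}

def lookup(user, filename):
    """Resolve /user/filename: search the MFD for the user, then their UFD."""
    ufd = mfd.get(user)
    if ufd is None:
        return None          # unknown user
    return ufd.get(filename) # None if the file does not exist

print(lookup("alice", "test"))  # alice's data
print(lookup("bob", "test"))    # bob's data
```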

• Advantages:
– We can give a full path like /user-name/directory-name/.
– Different users can have the same directory and file names.
– Searching for files becomes easier due to path names and user grouping.

• Disadvantages:
– A user is not allowed to share files with other users.
– Still, it is not very scalable; two files of the same type cannot be grouped together for the same user.

c) TREE-STRUCTURED DIRECTORY
• Once we see a two-level directory as a tree of height 2, the natural generalization is to extend the directory structure to a tree of arbitrary height. This generalization allows users to create their own subdirectories and to organize their files accordingly.

• The tree has a root directory, and every file in the system has a unique path.
• Advantages:
– Very general, since a full path name can be given.
– Very scalable; the probability of name collision is low.
– Searching becomes very easy; we can use both absolute and relative paths.
• Disadvantages:
– Not every file fits into the hierarchical model; files may need to be saved in multiple directories.
– We cannot share files.
– It is inefficient, because accessing a file may require traversing multiple directories.
----------------------------------------------------****---------------------------------------------------------
ACCESS PATHS
• An access path is the directory path required to access a file. Access paths are used by an IDE in search operations to find files; in particular, they are used to find the library, source, and header files in a build target. If the compiler can't find a file used in the build target, the program will not compile. Essentially these are the paths to where your sources and libraries are located on the computer. The access path is also the set of locations MS-DOS or Windows looks in when a command is not an internal command and is not in the current directory.
• For example, when typing "fdisk" as a command, if the access path is not set to the location of fdisk.exe, you'll receive a "Bad command or file name" error message, even if the file exists elsewhere on the computer.
DIRECTORY OPERATIONS
• Create
– A directory is created. It is empty except for dot and dotdot, which are put there
automatically by the system.
• Delete
– A directory is deleted. Only an empty directory can be deleted.
• Opendir
– Directories can be read, but before reading a directory, it must be opened first. Therefore, to list all the files present in a directory, a listing program opens the directory to read out the names of all the files it contains.
• Closedir
– A directory should be closed to free up internal table space when it has been read.
• Readdir
– This call returns the next entry in an open directory.
• Rename
– Directory can also be renamed just like the files.
• Link
– Linking is a technique that allows a file to appear in more than one directory.
• Unlink
– A directory entry is removed.
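Most of these operations map directly onto standard system calls. For instance, Python's `os` module exposes them (the directory and file names below are arbitrary, and a temporary scratch area is used so the demo is self-contained):

```python
import os
import tempfile

base = tempfile.mkdtemp()            # scratch area for the demo
path = os.path.join(base, "mydir")

os.mkdir(path)                       # Create: a new, empty directory
open(os.path.join(path, "a.txt"), "w").close()
print(os.listdir(path))              # Opendir/Readdir: list entries -> ['a.txt']

os.rename(path, os.path.join(base, "renamed"))   # Rename the directory
os.remove(os.path.join(base, "renamed", "a.txt"))
os.rmdir(os.path.join(base, "renamed"))          # Delete: only works when empty
```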

------------------------------------------------------*****----------------------------------------------------

3.2.4 FILE PROTECTION


In computer systems, a lot of user information is stored. The objective of the operating system is to keep the user's data safe from improper access. Protection can be provided in a number of ways. For a single laptop system, we might provide protection by locking the computer in a desk drawer or file cabinet. For multi-user systems, different mechanisms are used.
Types of Access:
Files that users can access directly need protection; files that are not accessible to other users do not require protection. The protection mechanism provides controlled access by limiting the types of access to a file. Whether access is granted to a user depends on several factors, one of which is the type of access required.

Several different types of operations can be controlled:


• Read – reading from the file.
• Write – writing or rewriting the file.
• Execute – loading the file into memory and executing it.
• Append – writing new information at the end of the existing file.
• Delete – deleting the file and freeing its space for reuse.
• List – listing the name and attributes of the file.

Operations like renaming, editing the existing file, and copying can also be controlled. There are many protection mechanisms; each has different advantages and disadvantages and must be appropriate for the intended application.

Access Control:

Different users may access a file in different ways. The general way of providing protection is to associate identity-dependent access with all files and directories through a list called an access-control list (ACL), which specifies the names of users and the types of access associated with each user.

The main problem with access lists is their length. If we want to allow everyone to read a file, we must list all users with read access.

This technique has two undesirable consequences:


Constructing such a list may be a tedious and unrewarding task, especially if we do not know in advance the list of users in the system.
Previously, a directory entry was of fixed size, but with access lists it becomes variable-sized, which complicates space management. These problems can be resolved by using a condensed version of the access list.

To condense the length of the access-control list, many systems recognize three classifications of users in connection with each file:
• Owner – the user who created the file.
• Group – a set of users who have similar needs and share the file.
• Universe – all other users in the system fall under the category called universe.

The most common recent approach is to combine access-control lists with the
normal general owner, group, and universe access control scheme. For example:
Solaris uses the three categories of access by default but allows access-control
lists to be added to specific files and directories when more fine-grained access
control is desired.
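The condensed owner/group/universe scheme amounts to a three-way check, where the most specific matching class decides. A sketch (user names, group names, and the permission sets are hypothetical):

```python
def allowed(user, user_groups, file_owner, file_group, perms, op):
    """Check op against perms = {'owner': set, 'group': set, 'universe': set}.

    The most specific matching class decides: owner first, then group,
    then universe, as in the Unix-style scheme described above.
    """
    if user == file_owner:
        return op in perms["owner"]
    if file_group in user_groups:
        return op in perms["group"]
    return op in perms["universe"]

perms = {"owner": {"read", "write", "execute"},
         "group": {"read"},
         "universe": set()}

print(allowed("alice", {"staff"},  "alice", "staff", perms, "write"))  # True
print(allowed("bob",   {"staff"},  "alice", "staff", perms, "write"))  # False
print(allowed("eve",   {"guests"}, "alice", "staff", perms, "read"))   # False
```

An ACL-capable system such as Solaris consults per-file ACL entries first and falls back to this scheme otherwise.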

Other Protection Approaches:


Access to a file can also be controlled by a password. If the passwords are random and changed often, this can effectively limit access to a file.
The use of passwords has a few disadvantages:
• The number of passwords can become very large, making them difficult to remember.
• If one password is used for all files, then once it is discovered, all files are accessible; protection is on an all-or-none basis.

-----------------------------------------------------*****----------------------------------------------------

UNIT IV- MEMORY MANAGEMENT
4.1 Functions of Memory Management
• To keep track of all memory locations-free or allocated and if allocated, to which
process and how much.
• To decide the memory allocation policy i.e. which process should get how much
memory, when and where.
• To use various techniques and algorithms to allocate and de-allocate memory
locations.
4.1.1 Issues in memory management scheme
Premature frees and dangling pointers
Many programs give up memory but attempt to access it later, and then crash or behave randomly. This condition is known as a PREMATURE FREE, and the surviving reference to the memory is known as a DANGLING POINTER. This problem is usually confined to manual memory management.
Memory leak
Some programs continually allocate memory without ever giving it up and eventually run
out of memory. This condition is known as a MEMORY LEAK.
External fragmentation
A poor allocator can do its job of giving out and receiving blocks of memory so badly that
it can no longer give out big enough blocks despite having enough spare memory. This
is because the free memory can become split into many small blocks, separated by
blocks still in use. This condition is known as EXTERNAL FRAGMENTATION.
Poor locality of reference
Another problem with the layout of allocated blocks comes from the way that modern
hardware and operating system memory managers handle memory: successive memory
accesses are faster if they are to nearby memory locations. If the memory manager
places far apart the blocks a program will use together, then this will cause performance
problems. This condition is known as poor LOCALITY OF REFERENCE.
Relocation and address translation
Relocation and Address Translation refers to the problem that arises because at the time
of compilation, the exact physical memory locations that a program is going to occupy at
the runtime are not known. Therefore the compiler generates the executable machine
code assuming that each program is going to be loaded from memory word 0. At the
execution time, the program may need to be relocated to different locations, and all the
addresses will need to be changed before execution.

Protection and sharing


• Protection refers to the preventing of one program from interfering with other
programs.
• Sharing is the opposite of protection. In this case, multiple processes have to refer
to the same memory locations. This need may arise because the processes might
be using the same piece of data or all processes might want to run the same
program e.g. word processor.

4.2 Contiguous Real Memory Management Techniques


• In contiguous memory management, a program is loaded into contiguous memory locations.
Types:
• Single contiguous memory allocation
• Fixed partition memory allocation
• Variable partition memory allocation

4.2.1 Single Contiguous Memory Management


• It is the easiest memory management technique.
• In this method the physical memory is divided into two areas.
• One which is permanently allocated to the Operating System and the other to the
user process.
• At any time only one user process is in the memory.
• This process runs to completion, and then the next process is brought into memory.
• All the ready processes are held on the disk, whereas the Operating System holds their PCBs (process control blocks) in memory in order of priority.

• At any time one of them runs in the main memory.
• When this process is blocked, it is 'swapped out' from the main memory to the
disk.
• The next highest priority process is 'swapped in' the main memory from the disk
and it starts.
• Thus, there is only one process in the main memory even if conceptually it is a
multi-programming system.
• For example, MS-DOS operating system allocates memory in this way.

4.2.2 Fixed Partitioned Memory Management

• Main memory is divided into various sections called partitions.
• These partitions can be of different sizes, but once decided at the time of system generation, they cannot be changed.
• These partitions are called static partitions.
• On declaring static partitions, the Operating System creates a partition description table (PDT).
• Initially all the entries are marked FREE, and when a process is loaded into one of the partitions, the status entry for that partition is changed to ALLOCATED.
This method works as follows.
• When a partition is to be allocated to a process, the long-term scheduler of the process management module decides which process is to be brought into memory next.
• It then finds out the size of the process to be loaded by consulting the information management module of the Operating System.
• It makes a request to memory management to allocate a partition of appropriate size.
• With the help of the information management module, it loads the program into the allocated partition.
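The partition description table and the allocation steps above can be sketched as a first-fit search over static partitions (the partition sizes below are illustrative):

```python
# Partition description table: sizes fixed at system generation time.
pdt = [
    {"base": 0,   "size": 100, "status": "FREE"},
    {"base": 100, "size": 200, "status": "FREE"},
    {"base": 300, "size": 400, "status": "FREE"},
]

def allocate(process_size):
    """First-fit: mark the first FREE partition large enough as ALLOCATED."""
    for entry in pdt:
        if entry["status"] == "FREE" and entry["size"] >= process_size:
            entry["status"] = "ALLOCATED"
            return entry["base"]
    return None  # no partition fits

print(allocate(150))  # 100 -> loaded at base 100; 50 units wasted (internal fragmentation)
print(allocate(500))  # None -> larger than every partition, cannot be loaded
```

The wasted 50 units inside the 200-unit partition illustrate the internal fragmentation discussed in section 4.2.4.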

4.2.3 Variable Partitioned Memory Management


• In variable partitioning, the number of partitions and their sizes are variable.
• They are not defined at the time of system generation.
• At any time, any partition of memory may be allocated to some process or free (unallocated).

• The Operating System is loaded in memory. All the rest of the memory is free.
• A program P1 is loaded into memory and starts executing (after which it becomes a process).
• A program P2 is loaded into memory and starts executing.
• A program P3 is loaded into memory and starts executing.
• The process P1 is blocked at a certain time. After a while, a new high-priority program P4 wants to occupy the memory.
• Let us assume that P4 is smaller than P1 but bigger than the free area available at the bottom.
• P4 is now loaded into memory and starts executing.
• Note that P4 is loaded in the same space where P1 was loaded.
• After some time P2 completes, whereas P3 and P4 continue.
• The area at the top and the one released by P2 can be joined together to form a larger partition.
• P1 is swapped back in when its input/output is completed.
• Another process P5 is also loaded into memory.
Advantages:
1. No Internal Fragmentation: In variable Partitioning, space in main memory is allocated
strictly according to the need of process, hence there is no case of internal
fragmentation. There will be no unused space left in the partition.

2. No restriction on Degree of Multiprogramming:
More processes can be accommodated due to the absence of internal fragmentation. Processes can be loaded until memory is exhausted.

3. No limitation on the size of the process:
In fixed partitioning, a process larger than the largest partition could not be loaded, and a process cannot be divided, as that is invalid in a contiguous allocation technique. In variable partitioning, the process size is not restricted, since the partition size is decided according to the process size.

Disadvantages:
• Difficult Implementation: Implementing variable partitioning is more difficult than fixed partitioning, as it involves allocating memory at run time rather than at system configuration time.
• External Fragmentation: There will be external fragmentation despite the absence of internal fragmentation. For example, suppose processes P1 (2 MB) and P3 (1 MB) complete their execution, leaving two holes of 2 MB and 1 MB. If a process P5 of size 3 MB arrives, the empty space cannot be allocated to it, as no spanning is allowed in contiguous allocation: a process must be contiguously present in main memory to be executed. This results in external fragmentation.
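The external-fragmentation example above can be checked numerically: the free holes of 2 MB and 1 MB total 3 MB, yet a 3 MB process cannot be placed, because no single contiguous hole is big enough. A sketch:

```python
def can_place(process_size, free_holes):
    """Contiguous allocation: the process must fit entirely within one hole."""
    return any(hole >= process_size for hole in free_holes)

holes = [2, 1]                     # MB left after P1 and P3 finished
print(sum(holes))                  # 3 -> enough memory in total...
print(can_place(3, holes))         # False -> ...but externally fragmented
print(can_place(3, [sum(holes)]))  # True once compaction merges the holes
```

The last line illustrates why compaction (section 4.2.4) resolves external fragmentation: it coalesces the scattered holes into one contiguous region.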

4.2.4 Fragmentation
1. Internal Fragmentation
• Internal Fragmentation is found in fixed partition scheme.
• To overcome the problem of internal fragmentation, instead of fixed partition
scheme, variable partition scheme is used.

2. External Fragmentation
• External Fragmentation is found in variable partition scheme.
• To overcome the problem of external fragmentation, compaction technique is used
or non-contiguous memory management techniques are used.

4.3 Non-Contiguous Real Memory Management Techniques

Logical vs. Physical Address Space


• Logical address – generated by the CPU; also referred to as a virtual address. A page address is a logical address, represented by a page number and an offset.
• Logical Address = Page number + page offset
• Physical address – the address seen by the memory unit. A frame address is a physical address, represented by a frame number and an offset.
• Physical Address = Frame number + page offset
• The user program deals with logical addresses; it never sees the real physical
addresses.
4.3.1 Paging
• Memory-management technique that permits the physical address space of a
process to be non-contiguous.
• Each process is divided into a number of small, fixed-size partitions called pages
or Divide logical memory into blocks of same size called pages.
• Physical memory is divided into a large number of small, fixed-size partitions called
frames.
• When a process is to be executed, its pages are loaded into any available memory
frames.
• Some internal fragmentation, but no external fragmentation.
Page Map Table (PMT): A data structure called page map table is used to keep track of
the relation between a page of a process to a frame in physical memory.

Relocation and Address Translation


• Operating system maintains a page table for each process
– Contains the frame number for each process page
– To translate logical to physical addresses
• A logical address is divided into:
– Page number (p) – used as an index into a page table which contains
base address of each page in physical memory.
– Page offset (d) – combined with base address to define the physical
memory address that is sent to the memory.

Address translation in paging

Example: Paging Model of memory is shown:

• To run a program of size n pages, we need to find n free frames and load the program.

How paging works?


• The process address space of a program is divided into contiguous pages.
• Each page has two parameters: page number (p) and displacement (d).
• Memory is divided into fixed-size frames.
• The OS keeps track of free frames and allocates free frames to processes on demand.
• Any page can be placed in any available free frame.
• After page (p) is loaded into frame (f), the OS marks the frame as allocated.
• The logical address is (p, d); after loading, the address becomes (f, d).
• As the size of the page and the frame is the same, the displacement appears in both addresses.
• During execution, for every address the translation mechanism has to find the page number, look up the frame number, and append the displacement (d) to arrive at the final physical address.
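The translation just described (split the logical address into p and d, map p to its frame f, append d) can be sketched with a small page size; the page-table contents below are made up:

```python
PAGE_SIZE = 1024  # bytes per page (and per frame)

def translate(logical_address, page_table):
    """Split into (page, offset), map the page to its frame, rejoin."""
    page, offset = divmod(logical_address, PAGE_SIZE)
    if page >= len(page_table):
        # Out-of-range page number: would exceed the PMTLR limit.
        raise ValueError("page number out of range")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

# PMT: page 0 -> frame 5, page 1 -> frame 2, page 2 -> frame 7
pmt = [5, 2, 7]
print(translate(1030, pmt))  # page 1, offset 6 -> frame 2 -> 2*1024 + 6 = 2054
```

Note that the offset is carried over unchanged, exactly because pages and frames are the same size.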

Page Table Implementation


• Each process must have a Page Map Table (PMT)
• The PMT must be large enough to have as many entries as the maximum number of pages
allowed per process
• Very few processes use up all the pages
• Thus, there is scope for reducing PMT length and save memory

• This is achieved by a register called the Page Map Table Limit Register (PMTLR)
• It holds the number of pages in the process.
• There is 1 PMTLR for each PMT
• PMTLR is present inside the PCB of each process

Page Table Implementation (Software method)


• In this method, the operating system keeps all the PMTs in main memory
• When a process is created, its pages are loaded and a PMT is created and stored in
main memory; the starting word address of this PMT is therefore known at creation time
• This address is also stored in the PCB
• During a context switch, this address is loaded from the PCB into a register called as
PMTBR (Page Map Table Base Register)
• This register is used to locate the PMT itself in the memory.
• If a process is swapped out during a context switch and later brought back into
different page frames, a new PMT is created, possibly at a different memory location
• Thus, the PMTBR value will also change

• The following steps are followed to convert a logical address to a physical address:
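The individual steps appear in the accompanying figure; as a hedged sketch of the usual procedure (all addresses, values, and names below are assumptions for illustration), the PMT entry is itself fetched from main memory via the PMTBR, which is what makes this method cost an extra memory reference per translation:

```python
PAGE_SIZE = 1024
memory = [0] * 65536          # simulated word-addressed main memory

PMTBR = 3000                  # assumed: PMT of the running process starts here
memory[PMTBR + 0] = 5         # page 0 -> frame 5
memory[PMTBR + 1] = 2         # page 1 -> frame 2

mem_reads = 0
def read(addr):
    """Count every access to main memory."""
    global mem_reads
    mem_reads += 1
    return memory[addr]

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)
    f = read(PMTBR + p)       # the extra reference: fetch the PMT entry
    return f * PAGE_SIZE + d

phys = translate(1029)        # page 1, offset 5 -> frame 2, offset 5
data = read(phys)             # the actual data reference
print(phys, mem_reads)        # physical address 2053; 2 memory accesses in total
```

Counting the accesses makes the demerit below concrete: one read for the PMT entry plus one for the data itself.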

MERITS:
1. Simple method to implement
2. Inexpensive

DEMERITS:
1. Slower address translation
2. An additional memory reference is required to access the PMT

Advantages of Paging:
1. Easy to allocate memory
2. Easy to swap: pages, frames, and often disk blocks as well, are all the same size.

3. No external fragmentation: any page can be placed in any frame in physical memory.
4. Sharing of the common code is possible.


Disadvantages of Paging:
1. Internal fragmentation: Page size may not match size needed by process.
2. Page tables are fairly large, too big to fit in registers.

3. This table lookup adds an extra memory reference for every address translation.

4.3.2 Segmentation
• A segment can be defined as a logical grouping of instructions, such as a subroutine,
array, or data area. A program is a collection of segments, and segmentation is the
technique for managing them. A process is divided into unequal-size blocks called
segments; each segment has a name and a length.

A segment is a logical unit such as:
• main program, procedure, function
• local variables, global variables, common block
• stack, symbol table, arrays

Advantages:
• Protect each entity independently
• Allow each segment to grow independently
• Share each segment independently
• Each segment is of variable length; the length is defined in the program.

Segmentation Architecture
• Segment Table

– Maps two-dimensional user-defined addresses into one-dimensional physical
addresses. Each table entry has:
• Base - contains the starting physical address where the segments reside
in memory.
• Limit - specifies the length of the segment.
• A logical address consists of two parts.
– A segment number, s, and an offset, d.
• Segment number used as index into the segment table.

• The offset d is between 0 and the segment limit.


• The offset is added to the base to produce the physical address.

• If the offset is not within the limit, trap to the operating system. (logical address beyond
end of segment).
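A minimal sketch of this check-and-add translation (the segment table values and names are assumptions for illustration):

```python
# Hypothetical segment table: entry s holds (base, limit)
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                  # offset must satisfy 0 <= d < limit
        raise MemoryError("trap: logical address beyond end of segment")
    return base + d                 # physical address = base + offset

print(translate(2, 53))             # segment 2: 4300 + 53 = 4353
```

Unlike paging, the offset is added to a base rather than concatenated with a frame number, which is why segments can be of variable length.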

Segmentation Example:

Address Translation

Advantages and Disadvantages of Segmentation
• Advantages:
– Each segment can be
1. located independently
2. separately protected
3. grow independently
4. Segments can be shared between processes.
5. Eliminates internal fragmentation.

• Disadvantages:
– Requires the same allocation algorithms as variable memory partitions
– External fragmentation occurs; a common solution is to combine segmentation and paging.

Segmentation vs. Paging

4.4 Concept of Virtual Memory
• A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a hard
disk that's set up to emulate the computer's RAM.
• The main visible advantage of this scheme is that programs can be larger than physical
memory.
• Virtual memory serves two purposes.
– First, it allows us to extend the use of physical memory by using disk.
– Second, it allows us to have memory protection, because each virtual address is
translated to a physical address.

4.4.2 Definitions
Locality of Reference
• Locality of Reference refers to a phenomenon in which a computer program tends to
access the same set of memory locations over a particular time period.
• Locality of Reference refers to the tendency of the computer program to access
instructions whose addresses are near one another.
• The property of locality of reference is mainly shown by loops and subroutine calls in a
program.

Working Set
• Working set is a concept in computer science which defines the amount of memory that
a process requires in a given time interval.
• The set of pages that a process is currently using is called its working set
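As an illustrative sketch (the reference string and window size are assumptions, not from the notes), the working set over the last Δ references can be computed directly from a page-reference string:

```python
def working_set(refs, t, delta):
    """Distinct pages referenced in the window of `delta` references ending at time t."""
    window = refs[max(0, t - delta + 1): t + 1]
    return set(window)

# Hypothetical page-reference string (one page number per memory reference)
refs = [1, 2, 1, 3, 2, 4, 4, 4, 1]
print(working_set(refs, 8, 4))   # last 4 references are 4, 4, 4, 1 -> {1, 4}
```

A small working set relative to the number of pages the process owns is exactly the locality property described above.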

Page Replacement Policy


• FIFO
– In this algorithm, the operating system keeps all pages in memory in a queue,
with the oldest page at the front. When a page needs to be replaced, the page at
the front of the queue is selected for removal.
• NRU (Not Recently Used)
– Pages are classified by their Referenced and Modified bits, and the algorithm
removes a page at random from the lowest-numbered non-empty class. Implicit in
this algorithm is that it is better to remove a modified page that has not been
referenced in at least one clock tick than a clean page that is in heavy use.
• LRU (Least Recently Used)
– This algorithm replaces the page that has been least recently used.
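As an illustrative sketch of the FIFO policy (the reference string and frame count are assumptions; this particular string is the classic one used to demonstrate Belady's anomaly):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:           # page fault: page not in memory
            faults += 1
            if len(frames) == n_frames:  # memory full: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9 faults
```

With this string, 3 frames give 9 faults while 4 frames give 10: adding memory can make FIFO worse, which is Belady's anomaly.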

Dirty Page/Dirty Bit
• A dirty page is a page in memory (page cache) that has been changed from what is
currently stored on disk. This usually happens when an existing file on the disk is altered
or appended.
• Dirty bit: one bit for each page frame, set by hardware whenever the page is modified.
If a dirty page is replaced, it must be written to disk before its page frame is reused.

Demand Paging
• Demand paging is a method of virtual memory management.

• In a system that uses demand paging, the operating system copies a disk page into
physical memory only if an attempt is made to access it and that page is not already in
memory (i.e., if a page fault occurs).
• It follows that a process begins execution with none of its pages in physical memory and
many page faults will occur until most of a process's working set of pages are located in
physical memory.
• Demand paging follows that pages should only be brought into memory if the executing
process demands them.

UNIT 3 QUESTION BANK:


DEADLOCKS:

1. What is deadlock? Explain with the help of examples. 3/6 marks


2. What do you mean by Directed Resource Allocation Graphs (DRAG)? 3 marks
3. Explain how deadlocks are represented graphically. 3/6 marks
4. Explain the deadlock pre-requisites in details with the help of suitable examples. 3/6 marks
5. Explain the four conditions necessary for deadlocks to occur. 3/6 marks
6. Write short notes on:
a) Deadlock Ignorance
b) Deadlock avoidance
c) Deadlock Detection
d) Deadlock Prevention
e) Deadlock Recovery 3 / 6 marks
7. What do you mean by Circular wait? Give example. 3/ 6 marks
8. What do you mean by Hold and wait? Give example. 3/ 6 marks
9. What do you mean by Mutual exclusion? Give example. 3/ 6 marks
10. What do you mean by No Preemption? Give example. 3/ 6 marks
11. How OS detects a deadlock and resolves it? 3 marks
12. Explain the mechanisms used to recover from a deadlock. 3/6 marks

INFORMATION MANAGEMENT:

1. What do you mean by Information management? 3 marks


2. Define file. 3 marks

3. What are the activities involved in Information management? 3 marks
4. What do you mean by file structure? 3 marks
5. Explain the various attributes of a file? 3/ 6 marks
6. Explain the various File access methods in detail. 3/6 marks
7. Explain the following File access methods:
a) Sequential file access
b) Direct access
c) Indexed Sequential access 3/6 marks
8. What do you mean by Directory and Directory structure. 3/6 marks
9. Explain the information maintained in a directory. 3/6 marks
10. Explain hierarchical Directory system with the help of a suitable diagram. 3/ 6 marks
11. What do you mean by Access Paths? 3 marks
12. Explain the various directory operations in detail. 3/6 marks
13. Why file protection is important in file system? 3 marks
14. What are the various access types of file? 3/6 marks
15. What do you mean by access control? Explain in detail. 3/6 marks

UNIT 4 QUESTION BANK:


1. What do you mean by Memory management? 3 marks
2. Explain the various issues in memory management? 3/ 6 marks
3. What do you mean by relocation and address translation? 3/6 marks
4. Explain the functions of Memory management? 3/6 marks
5. What do you mean by Contiguous Real memory management? 3 marks
6. Explain the various Contiguous Real memory management techniques. 6 marks
7. Explain the various Non-Contiguous Real memory management techniques. 6 marks
8. With the help of a neat diagram example explain the single Contiguous memory management
technique. 3/6 marks
9. With the help of a neat diagram example explain the fixed partitioned memory management
technique. 3/6 marks
10. With the help of a neat diagram example explain the variable partitioned memory management
technique. 3/6 marks
11. What do you mean by Internal Fragmentation. Give example. 3 marks
12. What do you mean by External Fragmentation. Give example. 3 marks
13. Explain Internal and External Fragmentation with the help of examples. 3/ 6marks
14. Give advantages and disadvantages of single Contiguous memory management technique. 3/6
marks
15. Give advantages and disadvantages of fixed partitioned memory management technique. 3/6
marks
16. Give advantages and disadvantages of variable partitioned memory management technique.
3/6 marks
17. What do you mean by paging? 3 marks
18. Explain in detail any two Non-Contiguous Real memory management techniques. 6 marks
19. What do you mean by Non-Contiguous Real memory management? 3 marks
20. Explain paging mechanism with the help of a neat diagram. 3/6 marks
21. Explain Segmentation mechanism with the help of a neat diagram. 3/6 marks
22. Explain paging with the help of a suitable example. 3/6 marks
23. Give comparison/ differences between Paging and Segmentation. 3/ 6 marks
24. Give advantages and disadvantages of Paging? 3/6 marks
25. Give advantages and disadvantages of Segmentation? 3/6 marks
26. What is a Page Map Table? 3 marks
27. Explain how a page map table is implemented using a software method. 6 marks
28. Explain the mechanism of relocation and address translation during paging. 3/6 marks
29. Explain the mechanism of relocation and address translation during segmentation. 3/6 marks
30. What is a Physical and Logical address. 3 marks
31. What do you mean by virtual memory? 3 marks
32. Define the terms:
a) Locality of Reference
b) Dirty bit
c) Dirty Page
d) Page fault
e) Demand Paging
f) Working set
33. What are page replacement policies? 3 marks

