BSc_3rdSem_Course8_OS
01. Define Operating system. Write about history & evolution of Operating Systems.
02. Write about functions of Operating Systems.
03. What are the different types of Operating Systems?
04. Explain about Resource Abstraction in OS.
Model Paper 1
Model Paper 2
01. Define Operating system. Write about history & evolution of Operating Systems.
Operating System:
- The operating system is a system program that serves as an interface between the computing
system and the end-user.
- Operating systems create an environment where the user can run any programs or communicate
with software or applications in a comfortable and well-organized way.
- An operating system is a software program that manages and controls the execution of application
programs, software resources and computer hardware.
- It also helps manage the software/hardware resource, such as file management, memory
management, input/ output and many peripheral devices like a disk drive, printers, etc.
- Some of the popular operating systems are: Linux OS, Windows OS, Mac OS etc.
- When the first electronic computer was developed in 1940, it was created without any operating
system.
- In early times, users had full access to the computer machine and wrote a program for each task
in machine language.
- Programmers could perform and solve only simple mathematical calculations during that
computer generation, and such calculations did not require an operating system.
- During the late 1960s, designers developed a new kind of operating system that could run multiple
programs on a single computer, an approach called multiprogramming.
- The introduction of multiprogramming played a very important role in developing operating
systems, allowing a CPU to be kept busy at all times by performing different tasks on a computer at
the same time.
- During the third generation, there was phenomenal growth of minicomputers, starting in 1961
with the DEC PDP-1. These PDPs led to the creation of personal computers in the fourth
generation.
- The fourth generation of operating systems is related to the development of the personal
computer. However, the personal computer is very similar to the minicomputers that were
developed in the third generation.
- The cost of a personal computer was very high at that time. A major factor related to creating
personal computers was the birth of Microsoft and the Windows operating system.
- In 1981, Microsoft introduced MS-DOS (Microsoft Disk Operating System); however, it was very
difficult for people to understand its commands.
- After that, Microsoft released various Windows operating systems such as Windows 95, Windows 98,
Windows XP, Windows 7, etc.
- Currently, most Windows users use the Windows 10 operating system.
- Besides the Windows operating system, another popular operating system was built by Apple in the
1980s, under the direction of Steve Jobs, a co-founder of Apple.
- They named the operating system Macintosh OS or Mac OS.
02. Write about functions of Operating Systems.
Operating System:
- Operating system acts as an interface between the user & h/w components of the computer.
- Operating system is the first program to be loaded into the computer during booting & remains in the
memory all the time.
The basic functions of operating systems are listed below.
- Performs basic computer tasks such as managing keyboard, mouse, printer etc.
- When a new device is connected to the computer, it will be automatically detected.
- The operating system manages Computer’s resources like CPU, memory & I/O devices.
- The operating system provides user interfaces to interact easily with the computer: the
CLI (Command Line Interface) & the GUI (Graphical User Interface).
- Operating system provides an interface for the user to develop application programs & makes sure
that these applications run on other computers with the same or different h/w.
- The operating system enables the user to execute more than one process at a time.
- The operating system is responsible for allocating memory to different processes.
- The operating system enables the user to create, copy, delete, move, rename a file.
- The operating system provides security for the data in the computer.
- The operating system provides networking to share data between multiple systems.
03. What are the different types of Operating Systems?
Client-Server network operating system: It is the type of network operating system that allows
the users to access resources, functions, and applications through a common server or central
hub of resources. The client workstation can access all resources that exist in the central hub of
the network. Multiple clients can access and share different types of resources over the network
from different locations.
04. Explain about Resource Abstraction in OS.
- Resource abstraction is the process of "hiding the details of how the hardware
operates, thereby making computer hardware relatively easy for an application
programmer to use".
- One way in which the operating system might implement resource abstraction is to
provide a single abstract disk interface which will be the same for both the hard disk
and floppy disk.
- Such an abstraction saves the programmer from needing to learn the details of both
hardware interfaces. Instead, the programmer only needs to learn the disk
abstraction provided by the operating system.
- Resources include the Central Processing Unit (CPU), Memory, File storage,
Input/Output (I/O) devices, and Network connections.
- In process abstraction, details of the threads of execution are not visible to the
consumer of the process.
- While making the hardware easier to use, resource abstraction also limits the
specific level of control over the hardware by hiding some functionality behind the
abstraction.
- Since most application programmers do not need such a high level of control, the
abstraction provided by the operating system is generally very useful.
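To make the idea concrete, here is a hypothetical C sketch (not part of the syllabus material) of how an OS might present one abstract disk interface for two different devices: the application calls only disk_read(), and function pointers hide whether the hard-disk or floppy driver does the work. All names here are invented for illustration.

/* One abstract disk interface; each device supplies its own routines. */
struct disk_ops {
    const char *name;
    int (*read_block)(int block_no, char *buf);
    int (*write_block)(int block_no, const char *buf);
};

/* Hypothetical hard-disk driver routines. */
static int hd_read(int block_no, char *buf)        { /* talk to hard-disk controller */ return 0; }
static int hd_write(int block_no, const char *buf) { return 0; }

/* Hypothetical floppy driver routines. */
static int fd_read(int block_no, char *buf)        { /* talk to floppy controller */ return 0; }
static int fd_write(int block_no, const char *buf) { return 0; }

struct disk_ops hard_disk = { "hd", hd_read, hd_write };
struct disk_ops floppy    = { "fd", fd_read, fd_write };

/* The application programmer learns only this one interface. */
int disk_read(struct disk_ops *d, int block_no, char *buf)
{
    return d->read_block(block_no, buf);
}

int main(void)
{
    char buf[512];
    disk_read(&hard_disk, 0, buf);  /* the same call works for either device */
    disk_read(&floppy, 0, buf);
    return 0;
}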
System Call:
Process control : end, abort, create, terminate, allocate and free memory.
File management : create, open, close, delete, read file etc.
Device management
Information maintenance
Communication
                     Windows                    Unix
Process control      CreateProcess()            fork()
                     ExitProcess()              exit()
                     WaitForSingleObject()      wait()
File manipulation    CreateFile()               open()
                     ReadFile()                 read()
                     WriteFile()                write()
                     CloseHandle()              close()
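As an illustration of the Unix file-manipulation calls in the table, the sketch below copies data from one file to another using open(), read(), write(), and close(). The file names are made up for the example.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    ssize_t n;

    /* open() returns a file descriptor, or -1 on failure */
    int in  = open("input.txt", O_RDONLY);
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in == -1 || out == -1)
        return 1;

    /* read() and write() transfer raw bytes through the kernel */
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);

    close(in);    /* close() releases the descriptors */
    close(out);
    return 0;
}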
File Modification –
Text editors and other programs are used for modifying the contents of files.
Programming-Language support –
Compilers, Assemblers, Debuggers, and interpreters which are provided to users.
Communications –
Virtual connections among processes, users, and computer systems are provided by
system programs. Users can send messages to another user on their screen.
User View :
The user view depends on the system interface that is used by the users. The different types
of user view experiences can be explained as follows :
If the user is using a personal computer, the operating system is largely designed to make
the interaction easy. Some attention is also paid to the performance of the system, but
there is no need for the operating system to worry about resource utilization. This is
because the personal computer uses all the resources available.
If the user is using a system connected to a mainframe or a minicomputer, the operating
system makes sure that all the resources such as CPU, Memory, I/O devices etc. are
divided uniformly between the systems in the network.
If the user is sitting on a workstation connected to other workstations through networks,
then the operating system needs to focus on both individual usage of resources and
sharing through the network.
If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery level of
the device is also taken into account.
There are some devices that have little or no user view because there is no interaction
with the users. Examples are embedded computers in home devices, automobiles, etc.
1. Resource Allocation
The hardware contains several resources like registers, RAM, ROM, CPUs, I/O devices,
etc. These are all resources that the operating system allocates when an application program
demands them.
Only the operating system can allocate resources, and it uses several tactics and
strategies to make the most of its processing power and memory space.
The operating system uses a variety of strategies to get the most out of the hardware
resources, including paging, virtual memory, caching, and so on.
These are very important in the case of various user viewpoints because inefficient
resource allocation may affect the user viewpoint, causing the user system to lag or hang,
reducing the user experience.
2. Control Program
The control program controls how input and output devices (hardware) interact with the
operating system.
The user may request an action that can only be done with I/O devices; in this case, the
operating system must also provide proper communication with, control of, and detection and
handling of such devices.
Kernel Mode:
The kernel is the core program on which all the other operating system components rely.
It is used to access the hardware components and schedule which processes should run on a
computer system and when, and it also manages the application software and hardware
interaction.
Hence it is the most privileged program, unlike other programs it can directly interact with the
hardware.
When a program running in user mode needs hardware access, for example to a webcam, it
first has to go through the kernel by using a syscall; to carry out such requests, the CPU
switches from user mode to kernel mode at the time of execution.
After the execution of the request is complete, the CPU switches back to user
mode.
                     Kernel mode                                     User mode
Kernel mode vs       In kernel mode, the program has direct and      In user mode, the application
user mode            unrestricted access to system resources.        program executes and starts out.
Interruptions        In kernel mode, the whole operating system      In user mode, only the single process
                     might go down if an interrupt occurs.           fails if an interrupt occurs.
Virtual address      In kernel mode, all processes share a single    In user mode, all processes get
space                virtual address space.                          separate virtual address spaces.
Level of privilege   In kernel mode, the applications have more      While in user mode the applications
                     privileges as compared to user mode.            have fewer privileges.
Restrictions         As kernel mode can access both the user         User mode cannot directly access
                     programs as well as the kernel programs,        kernel programs; it needs system
                     there are no restrictions.                      calls to reach them.
Process Abstraction:
Processes are the most fundamental operating system abstraction.
Processes organize information about other abstractions and represent a single thing that
the computer is “doing.”
We know processes as applications or programs which are under execution.
Abstraction means displaying only essential information and hiding the details.
Process abstraction refers to providing only essential information about the data to the
outside world, hiding the background details or implementation.
Unlike threads, address spaces and files, processes are not tied to a hardware component.
Instead, they contain other abstractions.
Processes contain:
Process Hierarchy:
Nowadays, all operating systems permit a user to create and destroy processes.
A process can create several new processes during its time of execution.
The creating process is called Parent Process and the new process is called Child Process.
There are different ways for creating a new process. These are as follows −
Execution − The child process is executed by the parent process concurrently or it waits till
all children get terminated.
Sharing − The parent or child process shares all resources like memory or files or children
process shares a subset of parent’s resources or parent and children process share no
resource in common.
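A minimal sketch of this parent/child relationship, using the Unix calls from the earlier table: fork() creates the child, the child runs and exits, and the parent waits until the child terminates.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* parent creates a child process */

    if (pid == 0) {              /* child branch */
        printf("Child %d created by parent %d\n", getpid(), getppid());
        exit(0);                 /* child terminates */
    } else if (pid > 0) {        /* parent branch */
        wait(NULL);              /* parent waits till the child terminates */
        printf("Parent %d: child finished\n", getpid());
    }
    return 0;
}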
05. What are the different types of threads? Explain benefits of threads.
Types of Threads:
In the operating system, there are two types of threads: user-level threads and kernel-level threads.
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
Enhanced throughput of the system: When the process is split into many threads, and each
thread is treated as a job, the number of jobs done in the unit time increases.
Effective Utilization of Multiprocessor system: When you have more than one thread in
one process, you can schedule more than one thread in more than one processor.
Faster context switch: The context switching period between threads is less than the
process context switching. The process context switch means more overhead for the CPU.
Responsiveness: When the process is split into several threads, and when a thread
completes its execution, that process can be responded to as soon as possible.
Communication: Multi-thread communication is simple because the threads share the
same address space, while in process, we adopt just a few strategies for communication
between two processes.
Resource sharing: Resources can be shared between all threads within a process.
What if resources had been allotted to the cancelled target thread?
What if the target thread is terminated while it is updating the data it shares with
some other thread?
Here, asynchronous cancellation of the thread, where a thread immediately cancels the
target thread without checking whether it is holding any resources or not, creates trouble.
In deferred cancellation, however, a thread only indicates to the target thread that it should be
cancelled; the target thread then checks its cancellation flag to confirm whether it should be
cancelled immediately or not. The points at which a thread can be cancelled safely are
termed cancellation points by Pthreads (POSIX threads).
3. Signal Handling
Signal handling is more convenient in a single-threaded program, as the signal is
directly forwarded to the process. But when it comes to a multithreaded program, the issue
arises as to which thread of the program the signal should be delivered.
How the signal would be delivered to the thread would be decided, depending upon the
type of generated signal. The generated signal can be classified into two types:
synchronous signal and asynchronous signal.
If the signal is synchronous it would be delivered to the specific thread causing the
generation of the signal. If the signal is asynchronous it cannot be specified to which
thread of the multithreaded program it would be delivered.
We are also concerned about the time it takes to create a new thread. It must not be
the case that the time required to create a new thread is more than the time required by
the thread to service the request and then be discarded, as that would be a waste
of CPU time.
The solution to this issue is the thread pool. The idea is to create a finite number of
threads when the process starts. This collection of threads is referred to as the thread
pool. The threads stay in the thread pool and wait till they are assigned any request to be
serviced.
We are all aware of the fact that the threads belonging to the same process share the data
of that process. The issue here is: what if each particular thread of the process needs its
own copy of data? The specific data associated with a specific thread is referred to as
thread-specific data.
So these are threading issues that occur in the multithreaded programming environment.
Pthreads, the threads extension of the POSIX standard, may be provided as either a user- or
kernel-level library.
The Win32 thread library is a kernel-level library available on Windows systems.
The Java thread API allows thread creation and management directly in Java programs.
However, because in most instances the JVM is running on top of a host operating system,
the Java thread API is typically implemented using a thread library available on the host
system.
This means that on Windows systems, Java threads are typically implemented using the
Win32 API; UNIX and Linux systems often use Pthreads.
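A small Pthreads sketch of thread creation and management (compile with -pthread); the worker function and thread count are invented for the example.

#include <pthread.h>
#include <stdio.h>

/* Hypothetical worker: each thread prints its argument. */
static void *worker(void *arg)
{
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid[3];

    /* pthread_create() starts a new thread in this process */
    for (long i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    /* pthread_join() waits for each thread to finish */
    for (int i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);

    return 0;
}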
(Fragments of a worked scheduling example; the accompanying table and Gantt chart are not shown)
P2: 8 − 2 = 6
P3: 16 − 3 = 13
P2: 2, 8, 14
P3: 3, 6, 8
P2: 14 − 2 = 12
P3: 8 − 3 = 5
Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
Each process is assigned a priority. Process with highest priority is to be executed first and
so on.
Processes with same priority are executed on first come first served basis.
Priority can be decided based on memory requirements, time requirements or any other
resource requirement.
Given: a table of processes with their arrival time, execution time, and priority. Here we
consider 1 to be the lowest priority.
Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5
Waiting times (service time − arrival time) from the table above:
P2: 14 − 2 = 12
P3: 5 − 3 = 2
(Fragments of a further worked example)
P2: 2, 8
P3: 3, 6
P3: (9 − 3) + (17 − 12) = 11
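The service and waiting times above can be reproduced with a short simulation. This sketch runs non-preemptive priority scheduling over the table (taking a larger number to mean higher priority, since 1 is stated to be the lowest) and prints each process's service and waiting time.

#include <stdio.h>

struct proc { const char *name; int arrival, burst, prio, done; };

int main(void)
{
    /* Table from the example above; priority 1 is the lowest. */
    struct proc p[] = {
        {"P0", 0, 5, 1, 0}, {"P1", 1, 3, 2, 0},
        {"P2", 2, 8, 1, 0}, {"P3", 3, 6, 3, 0},
    };
    int n = 4, finished = 0, t = 0;

    while (finished < n) {
        int best = -1;
        /* among arrived, unfinished processes pick the highest priority */
        for (int i = 0; i < n; i++)
            if (!p[i].done && p[i].arrival <= t &&
                (best == -1 || p[i].prio > p[best].prio))
                best = i;
        if (best == -1) { t++; continue; }      /* CPU idle */

        /* non-preemptive: run the chosen process to completion */
        printf("%s: service time %d, waiting time %d\n",
               p[best].name, t, t - p[best].arrival);
        t += p[best].burst;
        p[best].done = 1;
        finished++;
    }
    return 0;                /* prints service times 0, 5, 11, 14 as in the table */
}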
Deadlock Characterization:
A deadlock happens in operating system when two or more processes need some resource
to complete their execution that is held by the other process.
A deadlock occurs if the four Coffman conditions hold true. They are given as follows:
Mutual Exclusion
Hold & Wait
No Preemption
Circular Wait
No Preemption
A resource cannot be preempted from a process by force.
A process can only release a resource voluntarily.
In the diagram below, Process 2 cannot preempt Resource 1 from Process 1.
It will only be released when Process 1 relinquishes it voluntarily after its execution is
complete.
02. Write about deadlock handling approaches. [or] Explain: Deadlock Prevention, Deadlock
Avoidance
1. Deadlock ignorance
It is the most popular method: the system acts as if no deadlock can ever occur, and if one
does, the user simply restarts.
Handling deadlock is expensive because a lot of code needs to be altered, which
decreases performance, so for less critical jobs deadlocks are ignored.
The Ostrich algorithm is used in deadlock ignorance. It is used in Windows, Linux, etc.
2. Deadlock prevention
It means that we design such a system where there is no chance of having a deadlock.
This graph is also a kind of graphical bankers' algorithm, where a process is denoted by a
circle Pi and a resource is denoted by a rectangle Rj.
The presence of a cycle in the resource allocation graph is a necessary but not sufficient
condition for the detection of deadlock. If every resource type has exactly one instance,
then the presence of a cycle is a necessary as well as sufficient condition for the detection
of deadlock.
This is an unsafe state: if P1 requests R2 and P2 requests R1, then deadlock will occur.
2) Banker’s algorithm
The resource allocation graph algorithm is not applicable to a system with multiple
instances of each resource type. For such a system, the Banker's algorithm is used.
Here, whenever a process enters the system, it must declare the maximum demand it may
possibly make.
At runtime, we maintain some data structure like current allocation, current need, current
available etc.
Whenever a process requests some resources we first check whether the system is in a
safe state or not.
Algorithm:
Consider the following 3 processes with total resources for A=6, B=5, C=7, D=6
Then we check whether the system is in deadlock or not and find the safe sequence of
process.
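A sketch of the safety check at the heart of the Banker's algorithm, using a classic three-resource example (invented for illustration, not the A/B/C/D figures above, whose full table is not reproduced here). The check repeatedly looks for a process whose remaining need fits in the available resources, lets it finish, and reclaims its allocation.

#include <stdio.h>

#define P 5   /* number of processes */
#define R 3   /* number of resource types */

int main(void)
{
    /* example allocation, remaining need, and available vectors */
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[P][R]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    int avail[R]    = {3,3,2};

    int finish[P] = {0}, seq[P], count = 0;

    while (count < P) {
        int found = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)       /* can Pi's remaining need be met? */
                if (need[i][j] > avail[j]) { ok = 0; break; }
            if (ok) {
                for (int j = 0; j < R; j++)   /* Pi finishes, returns its allocation */
                    avail[j] += alloc[i][j];
                finish[i] = 1;
                seq[count++] = i;
                found = 1;
            }
        }
        if (!found) { printf("unsafe state\n"); return 1; }
    }
    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", seq[i]);
    printf("\n");                             /* prints: P1 P3 P4 P0 P2 */
    return 0;
}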
Principles of Concurrency :
Today's technology, like multi-core processors and parallel processing, allows multiple
processes and threads to be executed simultaneously.
Multiple processes and threads can access the same memory space, the same declared
variable in code, or even read or write to the same file.
The amount of time it takes a process to execute cannot be simply estimated, and you
cannot predict which process will complete first, so you must build techniques to deal
with the problems that concurrency creates.
Interleaved and overlapping processes are two types of concurrent processes with the same
problems. It is impossible to predict the relative speed of execution, which is determined by
the following factors:
Problems in Concurrency :
It's difficult to spot a programming error because failures are usually not repeatable: the
shared components are in different states each time the code is executed.
Advantages :
1. Better Performance
It enables resources that are not being used by one application to be used by another.
Disadvantages :
It is necessary to protect multiple applications from each other.
It is necessary to use extra techniques to coordinate several applications.
Additional performance overheads and complexities in OS are needed for switching
between applications.
In the above diagram, the entry section handles the entry into the critical section.
It acquires the resources needed for execution by the process.
The exit section handles the exit from the critical section. It releases the resources and also informs
the other processes that the critical section is free.
Mutual Exclusion
By Mutual Exclusion, we mean that if one process is executing inside the critical section, then other
processes must not enter the critical section.
Progress
Progress means that if one process doesn't need to execute in the critical section, it should not
stop other processes from getting into the critical section.
Bounded Waiting
We should be able to predict the waiting time for every process to get into the critical section. The
process must not be endlessly waiting for getting into the critical section.
Architectural Neutrality
Our mechanism must be architecture-neutral. It means that if our solution works fine on one
architecture, then it should also run on the other ones as well.
1. Binary Semaphore :
This is also known as mutex lock. It can have only two values : 0 and 1. Its value is
initialized to 1. It is used to implement the solution of critical section problems with
multiple processes.
2. Counting Semaphore :
Its value can range over an unrestricted domain. It is used to control access to a resource
that has multiple instances.
1. P operation is also called wait, sleep, or down operation, and V operation is also called
signal, wake-up, or up operation.
2. Both operations are atomic and semaphore(s) is always initialized to one. Here atomic
means that variable on which read, modify and update happens at the same time with
no pre-emption i.e. in-between no read, modify and update or other operation is
performed that may change the variable.
3. A critical section is surrounded by both operations to implement process
synchronization. See the below image. The critical section of Process P is in between P
and V operation.
Now, let us see how it implements mutual exclusion. Let there be two processes P1 and
P2 and a semaphore s is initialized as 1.
Now if suppose P1 enters in its critical section then the value of semaphore s becomes 0.
Now if P2 wants to enter its critical section then it will wait until s > 0, this can only
happen when P1 finishes its critical section and calls V operation on semaphore s.
This way mutual exclusion is achieved. Look at the below image for details of the
binary semaphore.
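The same P/V idea in runnable form, using POSIX semaphores as the binary semaphore s (a minimal sketch; compile with -pthread). Each thread performs wait (sem_wait, the P operation) before its critical section and signal (sem_post, the V operation) after it.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                 /* binary semaphore, initialized to 1 */
int shared = 0;          /* variable protected by the critical section */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);    /* P operation: enter critical section */
        shared++;        /* critical section */
        sem_post(&s);    /* V operation: leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);              /* s = 1, as in the text */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared); /* always 200000 with the semaphore */
    return 0;
}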
Limitations of Semaphores:
IPC is a type of mechanism usually provided by the operating system (or OS).
The main aim or goal of this mechanism is to provide communications in between several
processes.
In short, inter-process communication allows a process to let another process know that some
event has occurred.
Synchronization:-
Synchronization is one of the essential parts of inter-process communication. Typically, it is
provided by inter-process communication control mechanisms, but sometimes it can also be
controlled by the communicating processes.
Mutual Exclusion:-
It is required that only one process can enter the critical section at a time.
This also helps in synchronization and creates a stable state to avoid the race condition.
Semaphore:-
Semaphore is a type of variable that usually controls the access to the shared resources by
several processes.
Semaphore is further divided into following two types:
Barrier:-
A barrier typically does not allow an individual process to proceed until all the processes
reach it.
It is used by many parallel languages, and collective routines impose barriers.
Spinlock:-
Spinlock is a type of lock, as its name implies. A process trying to acquire a spinlock
waits, or stays in a loop, while checking whether the lock is available or not.
It is known as busy waiting because even though the process is active, it does not
perform any useful operation (or task).
Pipes
Shared Memory
It can be referred to as a type of memory that can be used or accessed by multiple processes
simultaneously.
It is primarily used so that the processes can communicate with each other.
Therefore the shared memory is used by almost all POSIX and Windows operating systems
as well.
Message Queue
In general, several different processes are allowed to read and write messages to the message
queue.
In the message queue, the messages are stored or stay in the queue unless their recipients
retrieve them.
In short, we can also say that the message queue is very helpful in inter-process
communication and used by all operating systems.
To understand the concept of Message queue and Shared memory in more detail, let's take
a look at its diagram given below:
Direct Communication
In this type of communication process, usually, a link is created or established between two
communicating processes.
However, in every pair of communicating processes, only one link can exist.
Indirect Communication
Indirect communication can only exist or be established when processes share a common
mailbox, and each pair of these processes shares multiple communication links.
These shared links can be unidirectional or bi-directional.
FIFO:-
There are numerous reasons to use inter-process communication for sharing the data. Here
are some of the most important reasons that are given below:
Process Synchronization :
When two or more processes cooperate with each other, their order of execution must be
preserved; otherwise there can be conflicts in their execution, and inappropriate outputs can
be produced.
A cooperative process is one which can affect the execution of another process or can be
affected by the execution of another process. Such processes need to be synchronized so that
their order of execution can be guaranteed.
The procedure involved in preserving the appropriate order of execution of cooperative
processes is known as Process Synchronization.
There are various synchronization mechanisms that are used to synchronize the processes.
Race Condition :
A Race Condition typically occurs when two or more threads try to read, write and possibly
make the decisions based on the memory that they are accessing concurrently.
Critical Section
The regions of a program that try to access shared resources and may cause race conditions
are called critical section.
To avoid race condition among the processes, we need to assure that only one process at a
time can execute within the critical section.
Producer-Consumer problem
Also known as the Bounded-Buffer problem. In this problem, there is a buffer of n
slots, and each slot is capable of storing one unit of data.
There are two processes that are operating on the buffer – Producer and Consumer.
The producer tries to insert data and the consumer tries to remove data.
If the processes are run simultaneously they will not yield the expected output.
The solution to this problem is to create two semaphores, one full and the other
empty, to keep track of the concurrent processes.
mutex = 1
full = 0    // Initially, all slots are empty. Thus full slots are 0
empty = n   // All slots are empty initially

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // place in buffer
    signal(mutex);
    signal(full);
} while(true);
When the producer produces an item, the value of “empty” is reduced by 1 because one
slot will be filled now.
The value of mutex is also reduced, to prevent the consumer from accessing the buffer.
Now the producer has placed the item, and thus the value of “full” is increased by 1. The
value of mutex is also increased by 1 because the task of the producer has been completed
and the consumer can access the buffer.
Solution for Consumer –
do {
    wait(full);
    wait(mutex);
    // remove item from buffer
    signal(mutex);
    signal(empty);
    // consume the item
} while(true);
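For completeness, here is a runnable C version of the same bounded-buffer solution using POSIX semaphores and Pthreads; the buffer size and item count are invented for the example (compile with -pthread).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                     /* number of buffer slots (invented) */

int buffer[N], in = 0, out = 0;
sem_t empty, full, mutex;

static void *producer(void *arg)
{
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);       /* wait for an empty slot */
        sem_wait(&mutex);
        buffer[in] = item;      /* place in buffer */
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);        /* one more full slot */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);        /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out]; /* remove item from buffer */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);       /* one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&mutex, 0, 1);     /* mutex = 1 */
    sem_init(&empty, 0, N);     /* empty = n */
    sem_init(&full, 0, 0);      /* full = 0 */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}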
This problem occurs when many threads of execution try to access the same shared
resource at a time.
Some threads may read, and some may write.
If one of them tries editing the file, no other thread should be reading or writing at
the same time; otherwise the changes will not be visible to it.
However, if some thread is reading the file, then others may read it at the same time.
Precisely, in OS we call this situation the readers-writers problem.
Problem parameters:
Solution:
Writer process:
do {
// writer requests for critical section
wait(wrt);
// performs the write
// leaves the critical section
signal(wrt);
} while(true);
do {
// Reader wants to enter the critical section
wait(mutex);
// The number of readers has now increased by 1
readcnt++;
// there is at least one reader in the critical section
// this ensures no writer can enter if there is even one reader
// thus we give preference to readers here
if (readcnt==1)
wait(wrt);
// other readers can enter while this current reader is inside
// the critical section
signal(mutex);
// current reader performs reading here
wait(mutex); // a reader wants to leave
readcnt--;
// that is, no reader is left in the critical section,
if (readcnt == 0)
signal(wrt); // writers can enter
signal(mutex); // reader leaves
} while(true);
In the operating system, addresses identify the locations in memory where the actual code
and data reside.
We store the data in the memory at different locations with addresses to access the data
again whenever required in the future.
There are two types of addresses used for memory in the operating system, i.e., the physical
address and logical address.
The logical address is a virtual address viewed by the user. The user can't view the physical
address directly.
The logical address is used as a reference to access the physical address.
The fundamental difference between logical and physical addresses is that the CPU
generates the logical address during program execution. In contrast, the physical address
refers to a location in the memory unit.
A logical address is an address that is generated by the CPU during program execution. The
logical address is a virtual address as it does not exist physically, and therefore, it is also
known as a Virtual Address.
This address is used as a reference to access the physical memory location by CPU.
The term Logical Address Space is used to set all logical addresses generated from a
program's perspective.
A logical address usually ranges from zero to a maximum (max). The user program that
generates the logical address assumes that the process runs on locations between 0 and
max. The MMU combines this logical address (generated by the CPU) with the base address
to form the physical address.
The hardware device called Memory-Management Unit is used for mapping logical
addresses to their corresponding physical address.
The physical address identifies the physical location of required data in memory.
The user never directly deals with the physical address but can access it by its corresponding
logical address.
The user program generates the logical address and thinks it is running in it, but the program
needs physical memory for its execution.
Therefore, the logical address must be mapped to the physical address by the MMU before
it is used.
The basic difference between logical and physical addresses is that the CPU generates a
logical address from a program's perspective.
In contrast, the physical address is a location that exists in the memory unit.
Logical Address Space is the set of all logical addresses generated by the CPU for a program.
In contrast, all physical addresses mapped to corresponding logical addresses are called
Physical Address Space.
The logical address does not exist physically in the memory, whereas a physical address is a
location in the memory that can be accessed physically.
The logical and physical addresses are identical under the compile-time and load-time address
binding methods, whereas they differ under the run-time address binding method.
The CPU generates the logical address while the program is running, whereas the physical
address is computed by the Memory Management Unit (MMU).
There are some other differences between the logical and physical addresses, and let's
discuss them with the help of the below comparison table.
Address binding is the process of mapping from one address space to another address
space.
Logical addresses are generated by the CPU during execution, whereas a physical address
refers to a location in the physical memory unit.
Note that users deal only with logical addresses. The MMU translates the logical address.
The output of this process is the appropriate physical address of the data in RAM.
An address binding can be done in three different ways:
Compile Time: An absolute address can be generated if you know where a process will
reside in memory at compile time. That is, a physical address is generated in the program
executable during compilation.
Loading such an executable into memory is very fast.
But if another process occupies the generated address space, then the program crashes, and
it becomes necessary to recompile the program to use virtual address space.
Load Time: If it is not known at compile time where the process will reside, then
relocatable addresses will be generated.
The loader translates the relocatable addresses to absolute addresses. The base address of the
process in main memory is added to all logical addresses by the loader to generate the
absolute address.
If the base address of the process changes, then we need to reload the process again.
Execution Time: The instructions are already loaded into memory and are processed by the
CPU. Additional memory may be allocated or reallocated at this time.
This process is used if the process can be moved from one memory to another during
execution (dynamic linking done during load or run time).
02. Write about Memory allocation strategies (fixed & variable partitions).
Memory Allocation :
Memory allocation is an action of assigning the physical or the virtual memory address
space to a process (its instructions and data). The two fundamental methods of memory
allocation are static and dynamic memory allocation.
Static memory allocation method assigns the memory to a process, before its execution.
On the other hand, the dynamic memory allocation method assigns the memory to a
process, during its execution.
Fixed Partitioning :
Example of Paging in OS
For example, if the main memory size is 16 KB and Frame size is 1 KB. Here, the main
memory will be divided into the collection of 16 frames of 1 KB each.
There are 4 separate processes in the system, that is, A1, A2, A3, and A4, of 4 KB each. Here,
all the processes are divided into pages of 1 KB each so that the operating system can store
one page in one frame.
At the beginning of the process, all the frames remain empty so that all the pages of the
processes will get stored in a contiguous way.
In this example you can see that A2 and A4 are moved to the waiting state after some
time. Therefore, eight frames become empty, and so other pages can be loaded into those
empty blocks. The process A5, of size 8 pages (8 KB), is waiting in the ready queue.
In this example, you can see that there are eight non-contiguous frames available
in the memory, and paging offers the flexibility of storing the process at different
places. This allows us to load the pages of process A5 in place of A2 and A4.
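With 1 KB frames as above, a logical address splits into a page number and an offset, and the page table maps the page to a frame. A small sketch of the translation, with an invented page table for the eight pages of A5:

#include <stdio.h>

#define PAGE_SIZE 1024               /* 1 KB pages and frames, as above */

int main(void)
{
    /* invented page table: page i of the process -> frame number */
    int page_table[8] = {2, 5, 7, 8, 10, 11, 13, 14};

    int logical  = 3 * PAGE_SIZE + 200;       /* example logical address */
    int page     = logical / PAGE_SIZE;       /* page number = 3 */
    int offset   = logical % PAGE_SIZE;       /* offset = 200 */
    int frame    = page_table[page];          /* frame = 8 */
    int physical = frame * PAGE_SIZE + offset;

    printf("logical %d -> page %d, offset %d -> physical %d\n",
           logical, page, offset, physical);
    return 0;
}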
Advantages of Segmentation :
No Internal fragmentation.
Segment Table consumes less space in comparison to Page table in paging.
Disadvantage of Segmentation –
As processes are loaded and removed from the memory, the free memory space is
broken into little pieces, causing External fragmentation.
Demand Paging :
The process of loading the page into memory on demand (whenever page fault occurs) is
known as demand paging.
The process includes the following steps :
1. If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
2. The OS puts the interrupted process in a blocking state. For the execution to proceed the
OS must bring the required page into the memory.
3. The OS will search for the required page in the logical address space.
4. The required page will be brought from logical address space to physical address space.
The page replacement algorithms are used for the decision-making of replacing the page
in physical address space.
5. The page table will be updated accordingly.
6. The signal will be sent to the CPU to continue the program execution and it will place the
process back into the ready state.
As shown in the above diagram, A program with 8 pages is stored in the disk. When this
program is executed, only 3 pages (A, C & F) are loaded into the physical memory in the
frames 4, 6 & 9.
Whenever another page wants to enter into the physical memory, already entered pages
will be replaced using page replacement algorithms.
Page Fault – A page fault happens when a running program accesses a memory page that is
mapped into the virtual address space, but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In case of
page fault, Operating System might have to replace one of the existing pages with the newly
needed page. Different page replacement algorithms suggest different ways to decide which page
to replace. The target for all algorithms is to reduce the number of page faults.
Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page faults when
increasing the number of page frames while using the First in First Out (FIFO) page replacement
algorithm. For example, if we consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots,
we get 9 total page faults, but if we increase slots to 4, we get 10 page faults.
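These counts can be verified with a short FIFO simulation; the sketch below replays the reference string above with 3 and then 4 frames and prints the fault totals (9 and 10).

#include <stdio.h>

/* count page faults for FIFO replacement with 'frames' frames */
static int fifo_faults(const int *ref, int n, int frames)
{
    int slot[16], used = 0, next = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (slot[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < frames)
            slot[used++] = ref[i];        /* free frame available */
        else {
            slot[next] = ref[i];          /* evict the oldest page */
            next = (next + 1) % frames;
        }
    }
    return faults;
}

int main(void)
{
    int ref[] = {3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4};
    int n = sizeof ref / sizeof ref[0];

    printf("3 frames: %d faults\n", fifo_faults(ref, n, 3));  /* 9  */
    printf("4 frames: %d faults\n", fifo_faults(ref, n, 4));  /* 10 */
    return 0;
}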
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page faults
When 0 comes, it is already there, so —> 0 Page fault.
When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in
the future —> 1 Page fault.
0 is already there, so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault.
Now, for the further page reference string —> 0 Page faults because the pages are already available in
the memory.
Optimal page replacement is perfect, but not possible in practice as the operating system cannot
know future requests. The use of Optimal Page replacement is to set up a benchmark so that
other replacement algorithms can be analysed against it.
Initially all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page faults
0 is already there, so —> 0 Page fault.
When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 Page fault
0 is already in memory, so —> 0 Page fault.
4 takes the place of 1 —> 1 Page Fault
Now, for the further page reference string —> 0 Page faults because the pages are already available in
the memory.
Directory structure:
Directory can be defined as the listing of the related files on the disk. The directory may store some
or all of the file attributes.
To get the benefit of different file systems on different operating systems, a hard disk can be
divided into a number of partitions of different sizes. The partitions are also called volumes or
minidisks.
Each partition must have at least one directory in which, all the files of the partition can be listed.
A directory entry is maintained for each file in the directory which stores all the information related
to that file.
A directory is a file which contains the metadata (data about data) of a bunch of files.
Every Directory supports a number of common operations on the file:
1. Contiguous allocation
2. Linked list allocation
3. Indexed allocation
These methods provide quick access to the file blocks and also the utilization of disk space
in an efficient manner.
Contiguous Allocation:
Contiguous allocation is one of the most used allocation methods. Contiguous
allocation means we allocate the blocks in such a manner that all the blocks of a file
get contiguous physical blocks on the hard disk.
We can see in the below figure that in the directory, we have three files. In the table, we
have mentioned the starting block and the length of all the files. We can see in the table
that for each file, we allocate a contiguous block.
Linked List Allocation:
We can see in the below figure that we have a file named ‘jeep.’
The value of the start is 9. So, we have to start the allocation from the 9th block, and blocks are
allocated in a random manner.
The value of the end is 25. It means the allocation is finished on the 25th block.
We can see in the below figure that block 25 contains -1, which means a null pointer,
and it will not point to another block.
Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks
occupied by a file.
Each file has its own index block. The ith entry in the index block contains the disk address of the
ith file block.
The directory entry contains the address of the index block as shown in the image:
Block Device
It stores data in fixed-size blocks, each with its unique address. For example- Disks.
Character Device
It transmits or accepts a stream of characters, none of which can be addressed individually. For
instance, keyboards, printers, etc.
Network Device
It is used for transmitting the data packets.
1. The OS interacts with the device controllers via the device drivers while allocating the device to the
multiple processes executing on the system.
2. Device drivers can also be thought of as system software programs that bridge processes and device
controllers.
3. The device management function's other key job is to implement the API.
4. Device drivers are software programs that allow an operating system to control the operation of
numerous devices effectively.
5. The device controller used in device management operations mainly contains three registers:
command, status, and data.
Types of Buffering
There are three main types of buffering in the operating system, such as:
1. Single Buffer
In Single Buffering, only one buffer is used to transfer the data between two devices.
The producer produces one block of data into the buffer.
After that, the consumer consumes the buffer.
Only when the buffer is empty does the producer produce data again.
2. Double Buffer
In double buffering, two buffers are used in place of one: the producer can fill one buffer while the
consumer empties the other, and then the two buffers swap roles.
3. Circular Buffer
When more than two buffers are used, the collection of buffers is called a circular buffer. Each buffer
is one unit in the circular buffer. The data transfer rate increases using the circular buffer
rather than double buffering.
For example, one process might have the shared region starting at address 0x60000 while the
other process uses 0x70000. It is critical to understand that these two addresses refer to the exact
same piece of data. So storing the number 1 in the first process's address 0x60000 means the
second process has the value of 1 at 0x70000. The two different addresses refer to the exact same
location.
shmget() Function
The first parameter specifies the unique number (called key) identifying the shared segment. The
second parameter is the size of the shared segment, e.g., 1024 bytes or 2048 bytes. The third
parameter specifies the permissions on the shared segment.
On success, the shmget() function returns a valid identifier, while on failure, it returns -1.
Syntax
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget (key_t key, size_t size, int shmflg);
shmat() Function
The shmat() function attaches the shared segment identified by shmid to the address space of the
calling process.
Syntax
#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
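A minimal usage sketch combining the two calls: create a 1024-byte segment under an invented key, attach it with shmat(), write a string into it, and detach. Error handling is kept minimal for brevity.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* 1234 is an invented key; 0666 gives read/write permission */
    int shmid = shmget((key_t)1234, 1024, 0666 | IPC_CREAT);
    if (shmid == -1)
        return 1;

    /* attach the segment; the OS chooses the address (shmaddr = NULL) */
    char *mem = (char *)shmat(shmid, NULL, 0);

    strcpy(mem, "hello via shared memory");   /* visible to other attachers */
    printf("%s\n", mem);

    shmdt(mem);                               /* detach from this process */
    return 0;
}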
It stands for 'first-come-first-serve'. As the name suggests, the request that comes first will be
processed first and so on. The requests coming to the disk are arranged in a proper sequence as
they arrive. Since every request is processed in this algorithm, so there is no chance of 'starvation'.
Explanation: In the above image, we can see the head starts at position 50 and moves to request
82. After serving it, the disk arm moves towards the second request, which is 170, and then to
request 43, and so on. In this algorithm, the disk arm serves the requests in their arriving
order, until all the requests are served.
"Seek time" will be calculated by adding the head movement differences of all the requests:
Seek time= "(82-50) + (170-82) + (170-43) + (140-43) + (140-24) + (24-16) + (190-16) = 642
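The same total can be computed mechanically; this small sketch sums the absolute head-movement differences for the FCFS order above.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    int head = 50, seek = 0;

    /* FCFS: serve requests in arrival order, summing head movement */
    for (int i = 0; i < 7; i++) {
        seek += abs(req[i] - head);
        head = req[i];
    }
    printf("total seek time = %d\n", seek);   /* prints 642 */
    return 0;
}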
Example: Suppose a disk has 200 tracks (0-199). The request sequence(82,170,43,140,24,16,190)
are shown in the given figure and the head position is at 50.
Explanation: The disk arm searches for the request which will have the least difference in head
movement. So, the least difference is (50-43). Here the difference is not about the smallest request
value, but about the shortest time the head will take to reach the nearest next request. So, after 43,
the head will be nearest to 24, and from there the head will be nearest to request 16. After 16, the
nearest request is 82, so the disk arm will move to serve request 82, and so on.
Hence, Calculation of Seek Time = (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) +
(190-170) = 208
In this algorithm, the head starts to scan all the requests in one direction and reaches the end of the
disk. After that, it reverses its direction and starts to scan the requests in its path again, serving
them. Due to this feature, this algorithm is also known as the "Elevator Algorithm".
Example: Suppose a disk has 200 tracks (0-199). The request sequence(82,170,43,140,24,16,190) is
shown in the given figure and the head position is at 50. The 'disk arm' will first move to the larger
values.
Hence, the Calculation of 'Seek Time' will be like: (199-50) + (199-16) =332
It stands for "Circular-Scan". This algorithm is almost the same as the Scan disk algorithm but one
thing that makes it different is that 'after reaching the one end and reversing the head direction, it
starts to come back. The disk arm moves toward the end of the disk and serves the requests coming
into its path.
After reaching the end of the disk it reverses its direction and again starts to move to the other end
of the disk but while going back it does not serve any requests.
Example: Suppose a disk having 200 tracks (0-199). The request sequence(82,170,43,140,24,16,190)
are shown in the given figure and the head position is at 50.
Explanation: In the above figure, the disk arm starts from position 50, reaches the end (199),
and serves all the requests in the path. Then it reverses its direction and moves to the other end of
the disk, i.e. 0, without serving any task in the path.
After reaching 0, it will again move towards the largest remaining request, which is 43. So, the head
will start from 0 and move to request 43, serving all the requests coming in the path. And this
process keeps going.
In this algorithm, the disk arm moves to the 'last request' present in its direction and serves the
requests in its path. After reaching the last request, it reverses its direction and comes back. It
does not go to the end of the disk; instead, it goes only to the end of the requests.
Explanation: The disk arm starts from 50 and serves requests in one direction only, but instead
of going to the end of the disk, it goes to the last request in that direction, i.e. 190. Then it comes
back towards the last request at the other end of the disk, serving the requests in its path, until
the last request of the first side is served. Hence, Seek time = (190-50) + (190-16) = 314
The C-Look algorithm is almost the same as the Look algorithm. The only difference is that after
reaching the end requests, it reverses the direction of the head and starts moving to the initial
position. But in moving back, it does not serve any requests.
Example: Suppose a disk having 200 tracks (0-199). The request sequence(82,170,43,140,24,16,190)
are shown in the given figure and the head position is at 50.
Explanation: The disk arm starts from 50 and serves requests in one direction only, but instead
of going to the end of the disk, it goes to the last request in that direction, i.e. 190. Then it jumps
back to the last request at the other end of the disk without serving any requests on the way, and
from there serves the remaining requests in its path.
Hence, Seek Time = (190-50) + (190-16) + (43-16) = 341
KRISHNA UNIVERSITY
B.Sc DEGREE (CBCS) EXAMINATION
(Examination at the end of Third Semester)
OPERATING SYSTEMS
Model Paper 1
SECTION A – (5 x 4 = 20 marks)
Answer any FIVE of the following questions.
SECTION B – (5 x 10 = 50 marks)
Answer the following questions.
UNIT I
9. Write about evolution of operating systems.
(or)
10. Explain different types of operating systems.
UNIT II
11. Explain threading issues & thread libraries.
(or)
12. Explain process scheduling algorithms.
UNIT III
UNIT IV
15. Write about Paging & Segmentation.
(or)
16. Write about Virtual Memory.
UNIT V
17. Explain File Allocation Methods.
(or)
18. Disk scheduling algorithms.
KRISHNA UNIVERSITY
B.Sc DEGREE (CBCS) EXAMINATION
(Examination at the end of Third Semester)
OPERATING SYSTEMS
Model Paper 2
SECTION A – (5 x 4 = 20 marks)
Answer any FIVE of the following questions.
SECTION B – (5 x 10 = 50 marks)
Answer the following questions.
UNIT I
9. Write about History of operating systems.
(or)
10. Functions of operating systems.
UNIT II
11. Explain about different types of threads.
(or)
12. Explain preemptive & non preemptive scheduling algorithms.
UNIT III
UNIT IV
15. Write about Paging & Segmentation.
(or)
16. Write about Page Replacement Algorithms.
UNIT V
17. Explain Contiguous & Linked list File Allocation Methods.
(or)
18. Explain FCFS & SSTF scheduling algorithms.