Operating System Answers
Introduction
An operating system is a set of programs that enables a user to operate and interact
with a computer. Examples of operating systems are Linux distributions, Windows, macOS,
FreeBSD, etc. There are many types of operating systems. In this article, we will
discuss various classifications of operating systems.
Batch Operating Systems
A batch monitor reads all the pooled jobs and then starts executing them. The jobs are
divided into groups, and jobs with similar requirements are processed together in the same
batch. The batched jobs are then executed one by one, and because of this, the system
enhances utilization while decreasing turnaround time.
Advantages
● In a batch system, all jobs are performed repeatedly without the user’s
intervention.
● Input data can be fed into the batch processing system without using extra
hardware components.
● Small-scale businesses can use batch processing systems for executing small
tasks to their benefit.
● By giving rest to the system’s processors, a batch system is capable of working
in off-line mode.
● A batch processing system consumes less time for executing all jobs.
● Sharing of the batch system among multiple users is possible.
● The idle time of the batch system is very low.
● You can assign a specific time for the batch jobs so that when the computer is idle it
starts processing the batch jobs.
● Batch systems can manage large repeated workloads easily.
Disadvantages
Examples
● Payroll System
● Bank Invoice System
● Transactions Process
● Daily Report
● Research Segment
● Billing System
Time-sharing Operating Systems
Time-sharing is a logical extension of multiprogramming. The CPU executes multiple jobs by
switching among them, but the switches occur so frequently that the users can interact with
each program while it is running. An interactive computer provides direct communication
between the user and the system. The user gives instructions to the OS or a program directly,
using hardware, and waits for results.
A time-shared operating system uses CPU scheduling and multiprogramming to provide each
user with a small portion of a time-shared computer. Each user has at least one separate
program in memory. When a process executes, it executes for only a short time before it either
finishes or needs to perform input/output. In time-sharing operating systems several jobs must
be kept simultaneously in memory, so the system must have memory management and
protection.
Advantages
Disadvantages
● Reliability problems.
● One must take care of the security and integrity of user programs and data.
● Data communication problems.
Examples
Real-time Operating Systems
An RTOS can be a powerful tool if you’re creating complex embedded programs. It helps
isolate tasks and gives you the ability to run them concurrently. You can set prioritization levels of
tasks in most RTOSes, which allows some tasks to interrupt and run before other tasks. This is
known as “preemption.” If you need concurrency or are getting into deeper embedded concepts
like IoT or machine learning, it’s wise to add RTOSes and multi-threaded programming to your
toolkit.
Advantages
● Priority-Based Scheduling.
● Abstracting Timing Information.
● Maintainability/Extensibility.
● Modularity.
● Promotes Team Development.
● Easier Testing.
● Code Reuse.
● Improved Efficiency.
● Idle Processing.
Disadvantages
● Limited Tasks.
● Use Heavy System resources.
● Complex Algorithms.
● Device driver and interrupt signals.
● Thread Priority.
Examples
1. Multitasking OS: Enables execution of multiple programs at the same time. The
operating system accomplishes this by swapping each program in and out of
memory one at a time. When a program is switched out of memory, it is
temporarily saved on disk until it is required again.
2. Multiuser Operating System: This allows many users to share processing time on
a powerful central computer from different terminals. The operating system
accomplishes this by rapidly switching between terminals, each of which
receives a limited amount of processor time on the central computer.
Advantages
Disadvantages
Examples
Following are the four major components used in a Multiprocessor Operating System:
1. CPU – capable of accessing memories as well as controlling the entire I/O tasks.
2. Input/Output Processor – an I/O processor can access memories directly, and every
I/O processor is responsible for controlling all input and output tasks.
3. Input/Output Devices – these devices are used for inserting input
commands, and producing output after processing.
4. Memory Unit – a multiprocessor system uses two types of memory modules:
shared memory and distributed shared memory.
Advantages
● Great Reliability.
● Improve Throughput.
● Cost-Effective System.
● Parallel Processing.
Disadvantages
Functions of an Operating System
1. File Management
Operating systems are responsible for managing the files on a computer. This includes
creating, opening, closing, and deleting files. The operating system is also responsible for
organizing the files on the disk.
Think of your computer as a project manager. A project manager manages the whole
team, checks the work of all the team members, provides resources, and facilitates things for
team members. In the same way, the operating system is responsible for checking ongoing
processes, providing resources when required, and ensuring that everything is in order. This
could also include managing which files and folders are stored on the computer and who
has access to them.
The OS also handles file permissions, which dictate what actions a user can take on a
particular file or folder. For example, you may have the ability to read a file but not edit or
delete it. This prevents unauthorized users from accessing or tampering with your files.
● Creating a file
● Editing a file
● Updating a file
● Deleting a file
2. Device management
You can also use an operating system to install software and updates for your devices and
manage their security settings.
3. Process management
Each process is given a certain amount of time to execute, called a quantum. Once a
process has used its quantum, the operating system interrupts it and provides another
process with a turn. This ensures that each process gets a fair share of the CPU time.
4. Memory Management
One of the most critical functions of an operating system is memory management. This is
the process of keeping track of all the different applications and processes running on your
computer and all the data they’re using.
When a computer starts up, the operating system loads itself into memory and then
manages all the other running programs. It checks how much memory is used and how
much is available and makes sure that executing programs do not interfere with each other.
5. Job Accounting
An operating system’s (OS) job accounting feature is a powerful tool for tracking how your
computer’s resources are being used. This information can help you pinpoint and
troubleshoot any performance issues and identify unauthorized software installations.
Operating systems keep track of which users and processes use how many resources. This
information can be used for various purposes, including keeping tabs on system usage,
billing users for their use of resources, and providing information to system administrators
about which users and processes are causing problems.
The operating system does the following tasks:
Shortest Job First (SJF) Scheduling Example
Consider the following five processes:
Process | Burst time | Arrival time
P1 | 6 | 2
P2 | 2 | 5
P3 | 8 | 1
P4 | 3 | 0
P5 | 4 | 4
Step 1) At time = 0, process P4 arrives and starts executing, as it is the only
process present.
Step 2) At time =2, process P1 arrives and is added to the waiting queue. P4
will continue execution.
Step 3) At time = 3, process P4 will finish its execution. The burst time of P3
and P1 is compared. Process P1 is executed because its burst time is less
compared to P3.
Step 6) At time = 9, process P1 will finish its execution. The burst time of P3,
P5, and P2 is compared. Process P2 is executed because its burst time is the
lowest.
Step 7) At time=10, P2 is executing, and P3 and P5 are in the waiting queue.
Step 8) At time = 11, process P2 will finish its execution. The burst time of P3
and P5 is compared. Process P5 is executed because its burst time is lower.
Step 11) Let’s calculate the average waiting time for the above example.
Wait time
P4 = 0 − 0 = 0
P1 = 3 − 2 = 1
P2 = 9 − 5 = 4
P5 = 11 − 4 = 7
P3 = 15 − 1 = 14
Average Waiting Time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2
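The walkthrough above can be reproduced with a short simulation of non-preemptive SJF. This is a sketch: the function name is illustrative, and each process is given as a (name, burst, arrival) tuple matching the table.

```python
# Non-preemptive SJF: at each step, run the arrived process with the
# shortest burst time until it finishes.
def sjf_waiting_times(procs):
    time, waits = 0, {}
    remaining = sorted(procs, key=lambda p: p[2])  # order by arrival
    while remaining:
        ready = [p for p in remaining if p[2] <= time]
        if not ready:                      # CPU idle until next arrival
            time = min(p[2] for p in remaining)
            continue
        name, burst, arrival = min(ready, key=lambda p: p[1])
        waits[name] = time - arrival       # waiting = start - arrival
        time += burst
        remaining.remove((name, burst, arrival))
    return waits

procs = [("P1", 6, 2), ("P2", 2, 5), ("P3", 8, 1), ("P4", 3, 0), ("P5", 4, 4)]
w = sjf_waiting_times(procs)
print(w)                                   # P4:0, P1:1, P2:4, P5:7, P3:14
print(sum(w.values()) / len(w))            # average waiting time 5.2
```

Running this reproduces the waiting times and the 5.2 average computed above.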
Round Robin Scheduling Example
Consider the following three processes, all arriving at time 0, with a time quantum of 2:
Process | Burst time
P1 | 4
P2 | 3
P3 | 5
Step 1) The execution begins with process P1, which has burst time 4. Here,
every process executes for 2 seconds. P2 and P3 are still in the waiting
queue.
Step 2) At time = 2, P1 is added to the end of the queue and P2 starts
executing.
Step 7) Let’s calculate the average waiting time for the above example.
Wait time
P1 = 0 + 4 = 4
P2 = 2 + 4 = 6
P3 = 4 + 3 = 7
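The same numbers fall out of a small round-robin simulation. This is a sketch assuming, as above, that all three processes arrive at time 0 and the quantum is 2; the function name is illustrative.

```python
from collections import deque

# Round Robin: each process runs for at most one quantum, then goes to
# the back of the ready queue if it still has work left.
def rr_waiting_times(bursts, quantum=2):
    remaining = dict(bursts)                  # name -> remaining burst
    waits = {name: 0 for name in bursts}
    last_left = {name: 0 for name in bursts}  # when it last left the CPU
    queue, time = deque(bursts), 0
    while queue:
        name = queue.popleft()
        waits[name] += time - last_left[name]  # time spent waiting in queue
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        last_left[name] = time
        if remaining[name] > 0:
            queue.append(name)
    return waits

print(rr_waiting_times({"P1": 4, "P2": 3, "P3": 5}))  # P1: 4, P2: 6, P3: 7
```

The computed waiting times match the Step 7 calculation above.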
Deadlock in Operating System
A deadlock in OS is a situation in which more than one process is blocked because it
is holding a resource and also requires some resource that is acquired by some
other process. The four necessary conditions for a deadlock situation to occur are
mutual exclusion, hold and wait, no preemption, and circular wait. We can prevent a
deadlock by preventing any one of these conditions. There are different ways to
detect and recover a system from deadlock.
● Mutual Exclusion: Only one process can use a resource at any given time i.e. the
resources are non-sharable.
● Hold and wait: A process is holding at least one resource at a time and is waiting
to acquire other resources held by some other process.
● No preemption: A resource can be released only voluntarily by the process
holding it, i.e., after that process has finished its execution.
● Circular wait: A set of processes wait for each other in a circular chain, each
holding a resource that the next process in the chain requests.
Example
In the above figure, there are two processes and two resources. Process 1 holds "Resource
1" and needs "Resource 2" while Process 2 holds "Resource 2" and requires "Resource 1".
This creates a situation of deadlock because none of the two processes can be executed.
Since the resources are non-shareable they can only be used by one process at a
time (Mutual Exclusion). Each process is holding a resource and waiting for the other
process to release the resource it requires. Neither of the two processes releases its
resource before finishing its execution, and this creates a circular wait. Therefore, all four
conditions are satisfied.
conditions are satisfied.
Deadlock Prevention
This is done by restraining the ways a request can be made. Since deadlock occurs when all
the above four conditions are met, we try to prevent any one of them, thus preventing a
deadlock.
Deadlock Avoidance
When a process requests a resource, the deadlock avoidance algorithm examines the
resource-allocation state. If allocating that resource sends the system into an unsafe state,
the request is not granted.
Therefore, it requires additional information, such as how many resources of each type are
required by a process. If granting a request would put the system into an unsafe state, the
system must hold the request back to avoid deadlock.
Deadlock Ignorance
In this method, the system assumes that a deadlock never occurs. Since deadlock
situations are not frequent, some systems simply ignore them. Operating systems such
as UNIX and Windows follow this approach. However, if a deadlock occurs, we can reboot
the system and the deadlock is resolved automatically.
A state is safe if the system can allocate resources to each process (up to its maximum
requirement) in some order and still avoid a deadlock. Formally, a system is in a safe
state only if there exists a safe sequence. So a safe state is not a deadlocked state and,
conversely, a deadlocked state is an unsafe state.
In an Unsafe state, the operating system cannot prevent processes from requesting
resources in such a way that any deadlock occurs. It is not necessary that all unsafe
states are deadlocks; an unsafe state may lead to a deadlock.
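The safe-state test can be sketched as a Banker's-style check: a state is safe if some ordering lets every process obtain its maximum need and finish. The process names and resource vectors below are illustrative, not from the text.

```python
# Banker's-style safety check. `alloc` and `max_need` map each process
# to a per-resource-type vector; `avail` is the free-resource vector.
def is_safe(avail, alloc, max_need):
    avail = list(avail)
    need = {p: [m - a for m, a in zip(max_need[p], alloc[p])] for p in alloc}
    unfinished = set(alloc)
    while unfinished:
        runnable = next((p for p in unfinished
                         if all(n <= v for n, v in zip(need[p], avail))), None)
        if runnable is None:
            return False                   # nobody can finish -> unsafe
        # runnable finishes and releases everything it held
        avail = [v + a for v, a in zip(avail, alloc[runnable])]
        unfinished.remove(runnable)
    return True

alloc = {"P0": [0, 1], "P1": [2, 0]}
max_need = {"P0": [3, 2], "P1": [2, 2]}
print(is_safe([1, 2], alloc, max_need))    # True: P1 can finish, then P0
print(is_safe([0, 0], alloc, max_need))    # False: nobody can proceed
```

Picking any runnable process greedily is sufficient here: if a safe sequence exists, finishing any currently runnable process preserves safety.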
The above Figure shows the Safe, unsafe, and deadlocked state spaces
Deadlock Detection
In this method, the OS assumes that a deadlock may occur in the future. So it
runs a deadlock detection mechanism at certain intervals of time, and
when it detects a deadlock, it starts a recovery approach.
The main task of the OS is to detect the deadlock. There are two methods of
detection, which we’ve already covered before.
The main difference between a RAG and a wait-for graph is the kinds of vertices
each graph contains. A RAG has two types of vertices: resources
and processes. A wait-for graph has only one type of vertex: processes.
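For single-instance resources, deadlock detection reduces to finding a cycle in the wait-for graph. A depth-first-search sketch (the graph encoding as a dict of edge lists is an assumption for illustration):

```python
# Deadlock detection on a wait-for graph: an edge A -> B means process A
# is waiting for a resource held by process B. A cycle means deadlock.
def has_deadlock(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2           # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True                # back edge -> cycle -> deadlock
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# The two-process example above: P1 waits for P2, and P2 waits for P1.
print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))   # True: deadlock
print(has_deadlock({"P1": ["P2"], "P2": []}))       # False: P2 can finish
```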
Virtual Memory
In this scheme, the user can load processes bigger than the available main memory,
under the illusion that enough memory is available to load the whole process.
Instead of loading one big process into the main memory, the operating system loads
parts of more than one process into the main memory.
By doing this, the degree of multiprogramming is increased and therefore the CPU
utilization is also increased.
Since all of this happens automatically, it makes the computer feel as if it has
unlimited RAM.
Demand Paging
Demand Paging is a popular method of virtual memory management. In demand
paging, the pages of a process which are least used, get stored in the secondary
memory.
A page is copied to the main memory when its demand is made or page fault occurs.
There are various page replacement algorithms which are used to determine the pages
which will be replaced. We will discuss each one of them later in detail.
3. The user will have less hard disk space available for their own use.
Suppose the CPU wants to access process P1, which is divided into ten pages. Following
the idea of virtual memory, only pages 1, 3, 5, 6, and 8 are selected to be loaded into the main
memory. For that, the CPU has to consult the page table. First, the CPU checks whether that
page has a valid (v) or invalid (i) bit. A valid bit indicates that the page is in main memory, and
an invalid bit indicates that the page is not in main memory and has to be loaded from
secondary memory.
Like from the page table, we can see that page 1 is at frame 1, page 3 is at frame 2, page 5
is in frame 3, and so on. If the page is not in the main memory, then those pages not in use
are swapped out, and a new required page is swapped in. In short virtual memory includes
the concept of demand paging and swapping.
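The valid/invalid-bit lookup described above can be sketched as follows. Page and frame numbers follow the example; the fault-handling path (picking a frame for the incoming page) is deliberately simplified and the names are illustrative.

```python
# page -> (valid bit, frame number); only loaded pages are valid.
page_table = {1: (True, 1), 3: (True, 2), 5: (True, 3)}

def access(page):
    valid, frame = page_table.get(page, (False, None))
    if valid:
        return f"page {page} -> frame {frame}"
    # Page fault: load the page from secondary memory into a frame.
    # (Simplified: just pick the next frame number instead of a victim.)
    frame = max((f for _, f in page_table.values() if f is not None),
                default=0) + 1
    page_table[page] = (True, frame)
    return f"page fault: loaded page {page} into frame {frame}"

print(access(3))   # valid bit set -> hit
print(access(2))   # invalid -> page fault, page loaded from disk
```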
Page Replacement in OS
Page replacement is needed in the operating systems that use virtual memory using
Demand Paging. As we know that in Demand paging, only a set of pages of a process is
loaded into the memory. This is done so that we can have more processes in the
memory at the same time.
When a page that is residing in virtual memory is requested by a process for its
execution, the Operating System needs to decide which page will be replaced by this
requested page. This process is known as page replacement and is a vital component in
virtual memory management.
To understand why we need page replacement algorithms, we first need to know about
page faults. Let’s see what is a page fault.
Page Fault: A page fault occurs when a program running on the CPU tries to access a page
that is in the address space of that program, but the requested page is currently not
loaded into the main physical memory, the RAM of the system.
Since the actual RAM is much smaller than the virtual memory, page faults occur. So
whenever a page fault occurs, the Operating system has to replace an existing page in
RAM with the newly requested page. In this scenario, page replacement algorithms help
the Operating System in deciding which page to replace. The primary objective of all the
page replacement algorithms is to minimize the number of page faults.
LRU (Least Recently Used) Page Replacement
In this algorithm, when a page fault occurs, the page that has not been used for the longest
duration of time is replaced by the newly requested page.
Example: Let’s see the performance of the LRU on the same reference string of 3, 1, 2, 1, 6, 5, 1,
3 with 3-page frames:
● Initially, since all the slots are empty, pages 3, 1, 2 cause a page fault and take the
empty slots.
Page faults = 3
● When page 6 comes, it is not in the memory, so a page fault occurs and the least
recently used page 3 is removed.
Page faults = 4
● When page 5 comes, it again causes a page fault, and page 2 is removed as it is
now the least recently used page (page 1 was referenced more recently than page 2).
Page faults = 5
● When page 1 comes again, it is already in memory, so no page fault occurs.
● When page 3 comes, a page fault occurs and this time page 6 is removed
as the least recently used one.
Total page faults = 6
Now in the above example, LRU causes one page fault fewer than FIFO (which causes 7 on
the same string), but this may not always be the case, as it will depend upon the series, the
number of frames available in memory, etc. In fact, on most occasions, LRU is better than FIFO.
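A strict LRU policy can be simulated directly (a sketch; the function name is illustrative). Note that because the hit on page 1 refreshes its recency, strict LRU produces 6 faults on this reference string, one fewer than FIFO's 7.

```python
# LRU simulation: `frames` is kept ordered from least to most recently
# used, so the victim is always at index 0.
def lru_faults(refs, capacity):
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            frames.remove(page)            # hit: refresh recency
        else:
            faults += 1
            if len(frames) == capacity:
                frames.pop(0)              # evict least recently used
        frames.append(page)                # most recently used at the end
    return faults

print(lru_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))  # 6 faults
```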
Advantages
Disadvantages
LIFO (Last In First Out) Page Replacement
In this algorithm, when a page fault occurs, the newest page, i.e., the page that was
brought into memory last, is replaced.
Example: Let’s see how the LIFO performs for our example string of 3, 1, 2, 1, 6, 5, 1, 3 with
3-page frames:
● Initially, since all the slots are empty, pages 3, 1, 2 cause page faults and take the
empty slots.
Page faults = 3
● When page 6 comes, the page fault occurs and page 2 is removed as it is on the
top of the stack and is the newest page.
Page faults = 4
● When page 5 comes, it is not in the memory, which causes a page fault, and
hence page 6 is removed being on top of the stack.
Page faults = 5
● When page 1 and page 3 come, they are already in memory, hence no page fault
occurs.
Total page faults = 5
As you may notice, this is the same number of page faults as the Optimal page replacement
algorithm. So we can say that, for this series of pages, this is the best algorithm that can be
implemented without prior knowledge of future references.
Advantages
● Simple to understand
● Easy to implement
● No overhead
Disadvantages
● Does not consider Locality principle, hence may produce worst performance
● The old pages may reside in memory forever even if they are not used
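The LIFO walkthrough above can be checked with a small simulation (a sketch; the function name is illustrative). On a fault, the page on top of the load-order stack, i.e. the newest page, is evicted; a hit does not change the load order.

```python
# LIFO page replacement: `stack` records load order; top = newest page.
def lifo_faults(refs, capacity):
    stack, faults = [], 0
    for page in refs:
        if page in stack:
            continue                       # hit: load order is unchanged
        faults += 1
        if len(stack) == capacity:
            stack.pop()                    # evict the most recently loaded page
        stack.append(page)
    return faults

print(lifo_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))  # 5 faults
```

This reproduces the 5 faults of the walkthrough, and also illustrates the disadvantage noted above: pages 3 and 1, loaded first, are never evicted.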
device.
● The Input-Output device data are also given to the Arithmetic Logical
Unit.
● The ALU operations are not directly applicable to such Input-Output data.
I/O mapped I/O | Memory mapped I/O
The devices are provided with 8-bit address values. | The devices are provided with 16-bit address values.
For transferring information, the instructions used are IN and OUT. | Since the peripherals are treated as memory locations, all the instructions related to memory, such as LDA, STA, etc., can be used.
I/O read or I/O write cycles are used to access the interfaced devices. | Memory read or memory write cycles are used to access the interfaced devices.
The entire memory address space can be used solely for addressing memory for interfacing. | The entire memory address space cannot be used solely for addressing memory for interfacing.
Only the accumulator and an I/O device can be used for data transfer. | Any register and an I/O device can be used for data transfer.
The decoder hardware involved is less. | The decoder hardware involved is more.
ALU operations cannot be performed directly on the data. | ALU operations can be performed directly on the data.
2^8 = 256 I/O ports are available for interfacing. | 2^16 = 65536 I/O ports are available for interfacing.
● Protection
● Security
Protection
Protection tackles the system's internal threats. It provides a mechanism for controlling access
to processes, programs, and user resources. In simple words, it specifies which files a specific
user can access, view, and modify, to maintain the proper functioning of the system. It allows
the safe sharing of a common physical or logical address space, which means that multiple
users can access shared memory without interfering with each other.
Security
Security tackles the system's external threats. The safety of their system
resources such as saved data, disks, memory, etc. is secured by the
security systems against harmful modifications, unauthorized access, and
inconsistency. It provides a mechanism (encryption and authentication) to
analyze the user before allowing access to the system.
Protection | Security
Protection deals with who has access to the system resources. | Security gives the system access only to authorized users.
Protection tackles the system's internal threats. | Security tackles the system's external threats.
It specifies which files a specific user can access or view and modify. | It defines who is permitted to access the system.
Virus
Trojan Horse
Worm
Trap Door
A trap door is basically a back door into software that anyone can use to access any
system without having to follow the normal security access procedures. It may exist in a
system without the user's knowledge. As they're so hard to detect, trap doors need
programmers or developers to thoroughly examine all of the system's components in
order to find them.
Denial of Service
● Beware of suspicious emails and links: When we visit some malicious link over
the internet, it can cause a serious issue by acquiring user access.
● Use Secure Wi-Fi Only: Sometimes using free wifi or insecure wifi may cause
security issues, because attackers can transmit harmful programs over the
network or record the activity etc, which could cause a big problem in the worst
case.
● Install anti-virus and malware protection: It helps to remove and avoid viruses
and malware from the system.
● Manage access wisely: The access should be provided to apps and software by
thorough analysis because no software can harm our system until it acquires
access. So, we can ensure to provide suitable access to software and we can
always keep an eye on software to see what resources and access it is using.
● Firewalls Utilities: It enables us to monitor and filter network traffic. We can use
firewalls to ensure that only authorized users are allowed to access or transfer
data.
● Encryption and Decryption Based transfer: The data content must be transferred
according to an encryption algorithm that can only be reversed with the
appropriate decryption key. This process protects your data from unauthorized
access over the internet, also even if data is stolen it would always remain
unreadable.
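As a toy illustration of the last bullet (data that is unreadable without the key, and reversible with it), here is an XOR sketch. This is only a demonstration of the reversibility idea; real systems use vetted algorithms such as AES, never a scheme like this.

```python
# Toy symmetric "cipher": XOR-ing with a repeating key is its own
# inverse, so applying it twice with the same key restores the data.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = xor_cipher(b"transfer $100", b"k3y")    # key is an assumed value
print(secret != b"transfer $100")                # True: unreadable as-is
print(xor_cipher(secret, b"k3y"))                # original bytes restored
```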
1. Burst Mode: Here, once the DMA controller gains control of the
system bus, it releases the system bus only after completion of the
data transfer. Until then, the CPU has to wait for the system buses.
2. Cycle Stealing Mode: In this mode, the DMA controller forces the CPU
to stop its operation and relinquish control over the bus for a short
time to the DMA controller. After the transfer of every byte, the DMA
controller releases the bus and then again requests the system bus.
In this way, the DMA controller steals a clock cycle for transferring
every byte.
Disadvantages
Disk Scheduling
Disk scheduling is needed because:
1. Many I/O requests may arrive from different processes, and the disk
controller can only serve one I/O request at a time. As a result, other I/O
requests need to wait in the waiting queue and get scheduled.
2. The operating system needs to manage the hardware efficiently.
3. Disk scheduling reduces the total seek time.
To perform disk scheduling, we have six disk scheduling algorithms: FCFS, SSTF, SCAN,
C-SCAN, LOOK, and C-LOOK.
The goal of a disk scheduling algorithm is to minimize the total seek time.
Algorithm
To understand the C-Scan Algorithm, let us assume a disc queue with requests for I/O.
‘head’ is the position of the disk head. We will now apply C-Scan algorithm-
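A minimal C-SCAN ordering sketch follows. Since the original queue values are not shown here, the request list and head position are assumed values; the sketch returns only the service order, omitting the jump to the last cylinder and back to cylinder 0 that a full C-SCAN seek-distance calculation would include.

```python
# C-SCAN service order: sweep upward from the head to the largest
# request, then wrap around and service the low-end requests in order.
def c_scan_order(requests, head):
    up = sorted(r for r in requests if r >= head)    # serviced on the sweep
    low = sorted(r for r in requests if r < head)    # serviced after the wrap
    return up + low

print(c_scan_order([82, 170, 43, 140, 24, 16, 190], head=50))
# services 82, 140, 170, 190 first, then wraps to 16, 24, 43
```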
File concept in OS
What is a file?
A file can be explained as the smallest unit of storage on a computer
system. The user can perform file operations like open, close, read, write, and
modify.
File concept
The operating system can provide a logical view of the information stored in
the disks, this logical unit is known as a file. The information stored in files is
not lost during power failures.
● Executable file
In an executable file, the binary code that is loaded in the memory for
execution is stored. It is stored in an exe type file.
● Source file
The source file has subroutines and functions that are compiled later.
● Object file
● Text file
● Image file
● Global Table
● Capability Lists for Domains
● Access Lists for Objects
● Lock and key Method
Capability Lists
In the access matrix in the operating system, a capability list is a collection of
objects and the operations that can be performed on them. Here, an object is
specified by a physical name called a capability. In this method, we
associate each row of the access matrix with its domain, instead of connecting
the columns of the access matrix to the objects as an access list does.
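A capability list can be sketched as a per-domain mapping from objects to permitted operations, i.e., one row of the access matrix per domain. The domain, file, and operation names below are illustrative.

```python
# Each domain (row of the access matrix) carries its own list of
# objects together with the operations it may perform on them.
capabilities = {
    "domain_user":  {"report.txt": {"read"}, "notes.txt": {"read", "write"}},
    "domain_admin": {"report.txt": {"read", "write", "delete"}},
}

def allowed(domain, obj, op):
    # A missing entry means the domain holds no capability for the object.
    return op in capabilities.get(domain, {}).get(obj, set())

print(allowed("domain_user", "report.txt", "write"))    # False
print(allowed("domain_admin", "report.txt", "delete"))  # True
```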
File Allocation Methods in
Operating System
File Allocation Methods
There are different kinds of methods that are used to allocate disk space.
We must select the best method for file allocation because it
directly affects system performance and efficiency. A good allocation
method utilizes the disk well and lets files be accessed quickly.
There are various types of file allocations method:
1. Contiguous allocation
2. Extents
3. Linked allocation
4. Clustering
5. FAT
6. Indexed allocation
7. Linked Indexed allocation
8. Multilevel Indexed allocation
9. Inode
There are different types of file allocation methods, but we mainly use three
types of file allocation methods:
1. Contiguous allocation
2. Linked list allocation
3. Indexed allocation
These methods provide quick access to the file blocks and also the
utilization of disk space in an efficient manner.
Contiguous Allocation: - Contiguous allocation is one of the most used
methods for allocation. Contiguous allocation means we allocate blocks
in such a manner that, on the hard disk, each file occupies a run of
contiguous physical blocks.
We can see in the below figure that in the directory, we have three files. In
the table, we have mentioned the starting block and the length of all the
files. We can see in the table that for each file, we allocate a contiguous
block.
Example of contiguous allocation
We can see in the given diagram, that there is a file. The name of the file is
‘mail.’ The file starts from the 19th block and the length of the file is 6. So,
the file occupies 6 blocks in a contiguous manner. Thus, it will hold blocks
19, 20, 21, 22, 23, 24.
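Because a contiguous file is described by just a start block and a length, its occupied blocks can be computed directly, as for the 'mail' file above (start 19, length 6). A one-line sketch:

```python
# Contiguous allocation: the directory stores (start, length), and the
# occupied blocks follow immediately from those two numbers.
def blocks(start, length):
    return list(range(start, start + length))

print(blocks(19, 6))  # the 'mail' file: [19, 20, 21, 22, 23, 24]
```

This directness is why contiguous allocation supports fast sequential and random access; the trade-off is external fragmentation and difficulty growing files.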
Indexed Allocation
The Indexed allocation method is another method that is used for file
allocation. In the index allocation method, we have an additional block, and
that block is known as the index block. For each file, there is an individual
index block. In the index block, the ith entry holds the disk address of the
ith file block. We can see in the below figure that the directory entry
comprises the address of the index block.
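An index-block lookup can be sketched as follows; the file name and disk block numbers are assumed values for illustration. Reading the i-th block of a file means following the directory to the index block, then reading its i-th entry.

```python
# Indexed allocation: the directory points at an index block, whose
# i-th entry holds the disk address of the file's i-th block.
index_block = {"jeep": [9, 16, 1, 10, 25]}   # file -> its index block

def read_block(name, i):
    # One extra lookup per access, but blocks may live anywhere on disk.
    return index_block[name][i]

print(read_block("jeep", 3))  # the 4th file block lives at disk block 10
```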
There are various advantages of a network operating system. Some of them are as
follows:
2. New technologies, upgradation, and hardware may be easily integrated into this
operating system.
Disadvantages
There are various disadvantages of a network operating system. Some of them are as
follows:
3. The user must rely on the central location for most processes.
There are various advantages and disadvantages of the distributed operating system.
These are as follows:
Advantages
There are various advantages of the distributed operating system. Some of them are
as follows:
1. It may share all resources (CPU, disk, network interface, nodes, computers,
etc.) from one site to another, increasing data availability across the entire
system.
2. The sites operate independently of one another, and as a result, if one
site crashes, the entire system does not halt.
4. It is an open system since it may be accessed from both local and remote
locations.
5. It increases the speed of data exchange from one site to another site.
6. Most distributed systems are made up of several nodes that interact to make
them fault-tolerant. If a single machine fails, the system remains operational.
Disadvantages
There are various disadvantages of the distributed operating system. Some of them
are as follows:
1. The system must decide which jobs must be executed, when they must be
executed, and where they must be executed. A scheduler has limitations, which
can lead to underutilized hardware and unpredictable runtimes.
2. The underlying software is extremely complex and is not understood very well
compared to other systems.
4. The more widely distributed a system is, the more communication latency can
be expected. As a result, teams and developers must choose between
availability, consistency, and latency.
6. Gathering, processing, presenting, and monitoring hardware use metrics for big
clusters can be a real issue.
4. The network operating system's primary goal is to give local services to remote
users. In contrast, the distributed operating system's goal is to manage the
hardware resources of the whole system.
5. The network operating system has a low level of transparency. On the other
hand, the distributed operating system is highly transparent and hides resource usage.
8. The network operating system maintains resources at each node, whereas the
distributed operating system manages resources globally, whether they are
centered or distributed.
Head-to-head comparison between network operating
system and distributed operating system
Difference between Job and Process
Job is work that needs to be done.
Job and task define the work to be done, whereas process defines the way
the work can be done or how the work should be done.
1. PROCESS :
● The process is a program under execution. A program can be
defined as a set of instructions.
The program is a passive entity and the process is an active entity.
When we execute a program, it remains on the hard drive of our
system and when this program comes into the main memory it
becomes a process.
The process can be present on a hard drive, memory or CPU.
Example –
In Windows, we can see each of the running processes in the
Windows Task Manager. All the processes running in the
background are visible under the Processes tab in Task Manager.
Another example may be a printer program running in the
background while we perform some other task on screen. That
printer program is a process.
● A process goes through many states when it is executed. Some of
these states are start, ready, running, waiting or
terminated/executed. These names aren’t standardized. These
states are shown in the Process state transition diagram or
process life cycle.
● More than one process can be executed at the same time. When
multiple processes are executed at the same time, it needs to be
decided which process needs to be executed first. This is known
as scheduling of a process or process scheduling. Thus, a
process is also known as a schedulable and executable unit.
● A process has certain attributes and a process also has a process
memory. Attributes of process are process id, process state,
priority, etc.
A process memory is divided into 4 sections – text section, data
section, heap and stack.
● The process also facilitates interprocess communication. When
multiple processes are executed, it is necessary for processes to
communicate using communication protocols to maintain
synchronization.
● To further dive into details of the process, you may refer to –
Introduction of process management.
3. JOB :
● A job is a complete unit of work under execution. A job consists of
many tasks which in turn, consist of many processes. A job is a
series of tasks in a batch mode. Programs are written to execute a
job.
● The term job is also ambiguous, as it holds many meanings. Jobs and
tasks are used synonymously in computational work.
● Example – The job of a computer is taking input from the user,
processing the data, and providing the results. This job can be
divided into several small tasks: taking input as one task,
processing the data as another task, and outputting the results as yet
another task.
These tasks are further executed in small processes. The task of
taking input has a number of processes involved. First of all, the
user enters the information. Then that information is converted to
binary language. Then that information goes to the CPU for further
execution. Then the CPU performs necessary actions to be taken.
Hence, a job is broken into tasks and these tasks are executed in
the form of processes.
● There may be one job at a time or multiple jobs at a time. A single
job can be called a task. To perform multiple jobs at a time, jobs
need to be scheduled. A job scheduler is a kind of application
program that schedules jobs. Job scheduling is also known as
batch scheduling.
The concepts of job, process, and task revolve around each other. Job, task,
and process may be considered the same or different depending on the
context in which they are used. A process is an isolated entity of the operating system. A
task may be called a process if it is a single task. A job may be called a
task if the job to be performed is a single unit of work. A process or group
of processes can be termed a task, and a group of tasks can be termed
a job.
Advantages and disadvantages of
multiprogramming systems
A real-time system has to handle a process within a specified time limit,
otherwise the system fails; in multiprogramming there is no such time limit.
A scheduling algorithm controls the order in which the work to be done is completed.
The goals of a scheduling algorithm are:
● CPU Utilization − A scheduling algorithm should be designed to keep the
CPU as busy as possible while remaining responsive to interactive users.
● Waiting time − It is the time a job waits for resource allocation; a good
scheduler keeps it as low as possible.
● Fairness − A good scheduler should make sure that each process gets
its fair share of the CPU.
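To make the waiting-time criterion concrete, here is a small sketch that computes per-process waiting times and the average under a first-come-first-served schedule. The burst times are the classic textbook example values, chosen for illustration.

```python
# Sketch: waiting time under first-come-first-served (FCFS) scheduling.
# Each process waits for the total burst time of everything before it.
def fcfs_metrics(burst_times):
    waiting, t = [], 0
    for burst in burst_times:
        waiting.append(t)  # time this process spends in the ready queue
        t += burst         # CPU is busy for the length of the burst
    avg_wait = sum(waiting) / len(waiting)
    return waiting, avg_wait

waiting, avg_wait = fcfs_metrics([24, 3, 3])
print(waiting)   # [0, 24, 27]
print(avg_wait)  # 17.0
```

Reordering the same bursts as [3, 3, 24] drops the average wait to 3.0, which is why scheduling order matters for the waiting-time criterion.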
What is Segmentation?
It is a technique of memory management in which every job gets divided into various blocks of
varied sizes, known as segments. This way, we get one segment for every module with pieces
performing related functions. These segments act as different spaces of the logical address of any
program. While executing a process, the corresponding segmentations load into a non-contagious
form of memory. It happens even if every segmentation loads into the available memory’s
contagious block.
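A minimal sketch of how a segmented logical address is translated may help here: each segment table entry holds a base (where the segment starts in physical memory) and a limit (its length). The table values below are made up for illustration.

```python
# Sketch: logical-to-physical address translation with a segment table.
# A logical address is a pair (segment number, offset); the hardware
# checks the offset against the segment's limit before adding the base.
segment_table = {
    0: (1400, 1000),  # segment 0: base 1400, limit 1000
    1: (6300, 400),   # segment 1: base 6300, limit 400
    2: (4300, 400),   # segment 2: base 4300, limit 400
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offset past the end of the segment: trap to the OS.
        raise MemoryError("segmentation fault")
    return base + offset

print(translate(2, 53))  # 4300 + 53 = 4353
```

Because each segment has its own base, the segments of one process can sit anywhere in physical memory, which is exactly the non-contiguous loading described above.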
Memory Size: Pages are blocks of fixed size, whereas segments are blocks of
varying sizes.
Types of Fragmentation
Fragmentation is of three types:
● External Fragmentation
● Internal Fragmentation
● Data Fragmentation (which can exist alongside the other two, or as a combination of both)
Internal Fragmentation
Whenever a memory block gets allocated with a process, and in case the process happens to be
smaller than the total amount of requested memory, a free space is ultimately created in this
memory block. And due to this, the memory block’s free space is unused. This is what causes
internal fragmentation. Read more on Internal Fragmentation here.
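The effect is easy to quantify when the allocator hands out fixed-size blocks. The block size and request size below are illustrative.

```python
# Sketch: internal fragmentation with a fixed-size block allocator.
# The allocator can only grant whole blocks, so any request that is not
# an exact multiple of the block size wastes the tail of the last block.
BLOCK = 4096  # 4 KiB blocks, a common choice

def internal_fragmentation(request):
    blocks = -(-request // BLOCK)      # ceiling division: blocks needed
    return blocks * BLOCK - request    # bytes allocated but never used

print(internal_fragmentation(5000))  # 2 blocks = 8192 bytes, 3192 wasted
print(internal_fragmentation(4096))  # exact fit, 0 wasted
```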
External Fragmentation
External fragmentation occurs whenever a method of dynamic memory allocation happens to
allocate some memory and leave a small amount of unusable memory. The total quantity of the
memory available is reduced substantially in case there’s too much external fragmentation. So,
there’s enough memory space in order to complete a request, and it is not contiguous. Thus, it is
known as external fragmentation. Read more on External Fragmentation here.
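The "enough in total, but not contiguous" situation can be shown with a toy first-fit allocator. The hole sizes are illustrative.

```python
# Sketch: external fragmentation. The free holes sum to 120 KB, yet a
# 100 KB request fails because no single contiguous hole is big enough.
holes = [50, 30, 40]  # sizes (KB) of free holes scattered through memory

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i   # index of the first hole that can hold the request
    return None        # no contiguous hole is large enough

print(sum(holes))             # 120 KB free in total
print(first_fit(holes, 100))  # None: the request still cannot be satisfied
print(first_fit(holes, 40))   # 0: the 50 KB hole can hold a 40 KB request
```

Compaction (shuffling allocations together to merge the holes) is the usual remedy, at the cost of moving live data.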
Causes of Fragmentation
The user processes are unloaded and loaded from the main memory. Also, all the processes are
kept in the memory blocks in a system’s main memory. Various spaces are left after the loading
and swapping of processes that other processes can’t load because of their sizes. The main
memory is available, but the space isn’t sufficient in order to load other processes since the
allocation of the main memory processes is dynamic.
2. The long-term scheduler chooses the processes or jobs from the job pool. In
contrast, the short-term scheduler chooses the processes from the ready queue.
4. The long-term scheduler assigns jobs to the ready queue for further action by
the short-term scheduler; hence it is referred to as a job scheduler. In contrast, the
short-term scheduler assigns the selected process to the CPU for execution; therefore, it is
also called a CPU scheduler.
5. The short-term scheduler chooses processes from the ready queue more
frequently than the long-term scheduler chooses processes from the job pool.
6. The long-term scheduler chooses processes from the job queue and loads them
into main memory for execution, whereas the short-term scheduler chooses the
process from among the several processes that are ready for the processor to run.
7. The long-term scheduler controls the degree of multiprogramming; the
short-term scheduler has little control over it.
8. The long-term scheduler selects a process less frequently; the short-term
scheduler selects one more frequently.
9. The long-term scheduler is always present in a batch OS and is minimal in a
time-sharing OS; the short-term scheduler is present in both batch and
time-sharing OS.
10. The long-term scheduler chooses processes from the job pool; the short-term
scheduler chooses them from the ready queue.
11. The long-term scheduler tries to select a balanced mix of processes that are
I/O bound and CPU bound.
Process Control block (PCB) is a data structure that stores information of a process.
2. Process State:
A process, from its creation to completion goes through different states. Generally, a
process may be present in one of the 5 states during its execution:
3. Process Priority:
Process priority is a numeric value that represents the priority of each process. The
lower the value, the higher the priority of that process. This priority is assigned at the
time of the creation of the PCB and may depend on many factors like the age of that
process, the resources consumed, and so on. The user can also externally assign a
priority to the process.
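Since a lower numeric value means higher priority, a min-heap is a natural structure for picking the next process. This is a hedged sketch; the PID labels are invented for illustration.

```python
# Sketch: selecting the next process by priority, where a LOWER numeric
# value means HIGHER priority, as described above.
import heapq

ready_queue = []
for priority, pid in [(3, "pid-10"), (1, "pid-11"), (2, "pid-12")]:
    heapq.heappush(ready_queue, (priority, pid))

# heappop returns the smallest tuple, i.e. the highest-priority process.
priority, pid = heapq.heappop(ready_queue)
print(pid)  # pid-11 runs first: it has the lowest priority value (1)
```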
5. Program Counter:
The program counter is a pointer that points to the next instruction in the program to
be executed. This attribute of PCB contains the address of the next instruction to be
executed in the process.
6. CPU registers:
A CPU register is a small, quickly accessible storage location inside the CPU. When a
process is switched out, the contents of these registers are saved in its PCB so that they
can be restored when the process runs again.
8. PCB pointer:
This field contains the address of the next PCB that is in the ready state. It helps the
operating system link PCBs together and maintain an easy control flow, for example
between parent processes and child processes.
Context Switching
Context switching is the act of switching the CPU from one process or
task to another. It involves storing the state of the running process so that it can be
restored and resume execution at a later point. This allows multiple processes to share
a single CPU and is an essential feature of a multitasking operating system.
So, whenever a context switch occurs during execution, the current state of the
running process (its CPU register contents, program counter, and so on) is saved into
its PCB in main memory. Keeping this state information in main memory, rather than
saving it to and retrieving it from secondary memory (hard disk), is what keeps the
switch fast.
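The save-then-restore sequence can be sketched as a toy model. The "CPU state" here is just a dictionary of a program counter and registers; the field names are invented for illustration, since real PCBs are kernel data structures.

```python
# Toy sketch of a context switch: the CPU state of the outgoing process
# is saved into its PCB, and the CPU state of the incoming process is
# restored from its PCB.
cpu = {"pc": 100, "registers": [1, 2, 3]}   # process A is currently running
pcbs = {
    "A": {"pc": 0, "registers": []},         # stale; overwritten on switch-out
    "B": {"pc": 500, "registers": [9, 8, 7]} # saved earlier when B was paused
}

def context_switch(cpu, pcbs, old, new):
    pcbs[old] = {k: v for k, v in cpu.items()}  # save state of old process
    cpu.update(pcbs[new])                       # restore state of new process

context_switch(cpu, pcbs, "A", "B")
print(cpu["pc"])          # 500 — the CPU now resumes process B
print(pcbs["A"]["pc"])    # 100 — A's state is preserved for later
```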
What is thrashing?
A state in which the CPU performs less "productive" work and more "swapping"
is known as thrashing.
It occurs when processes do not have enough frames for the pages they are
actively using, so they page-fault continuously.
Memory soon fills up, and each process starts spending most of its time waiting for
required pages to be swapped in, causing CPU utilization to fall low, as
every process has to wait for pages.
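The link between too few frames and a high fault rate can be shown with a small page-fault counter. This sketch uses FIFO replacement; the reference string is illustrative.

```python
# Sketch: counting page faults under FIFO page replacement. With too few
# frames the fault count stays high — the behaviour that drives thrashing.
from collections import deque

def page_faults(refs, frames):
    memory, faults = deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()   # evict the oldest resident page
            memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(page_faults(refs, 3))  # 9 faults: most references miss
print(page_faults(refs, 5))  # 5 faults: once all 5 pages fit, only the
                             # compulsory (first-touch) faults remain
```

When the fault count approaches the length of the reference string, the process spends nearly all its time swapping rather than computing, which is thrashing.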
Effects of thrashing
When the operating system encounters thrashing, it may try to apply a global
page-replacement algorithm.
However, this is not a suitable remedy: under global replacement, no process can
hold on to enough frames, which causes even more thrashing.
The hit ratio is the number of cache hits divided by the total number of content
requests received.
The miss ratio is the flip side of this: the number of cache misses divided by
the total number of content requests received.
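As a quick worked example with illustrative counts:

```python
# Sketch: hit ratio and miss ratio from cache counters.
hits, total_requests = 90, 120

hit_ratio = hits / total_requests                     # 90 / 120
miss_ratio = (total_requests - hits) / total_requests # 30 / 120

print(hit_ratio, miss_ratio)  # 0.75 0.25 — the two ratios always sum to 1
```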
Waiting time is the amount of time a process has been waiting in the ready queue.
Response time is the amount of time from when a request is submitted until the
first response is produced, not until the output is complete (relevant in a
time-sharing environment).
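The distinction is easiest to see with numbers for a single process. The arrival, first-run, completion, and burst times below are illustrative.

```python
# Sketch: waiting time vs response time for one process.
arrival, first_run, completion, burst = 0, 4, 10, 5

# Response time: from submission until the process first gets the CPU.
response_time = first_run - arrival

# Waiting time: total time in the ready queue = turnaround minus burst.
turnaround_time = completion - arrival
waiting_time = turnaround_time - burst

print(response_time, waiting_time)  # 4 5
```

The two differ whenever a process is preempted after its first run: response time is fixed at the first dispatch, while waiting time keeps growing with every later stint in the ready queue.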