Lecture Note Operating System
An operating system acts as an intermediary between the user of a computer and the computer hardware. The
purpose of an operating system is to provide an environment in which a user can execute programs in a
convenient and efficient manner. An operating system is software that manages the computer hardware. The
hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to
prevent user programs from interfering with the proper operation of the system. Internally, operating systems
vary greatly in their makeup, since they are organized along many different lines. The design of a new operating
system is a major task. It is important that the goals of the system be well defined before the design begins. These
goals form the basis for choices among various algorithms and strategies. Because an operating system is large
and complex, it must be created piece by piece. Each of these pieces should be a well-delineated portion of the
system, with carefully defined inputs, outputs, and functions.
An operating system is a program on which application programs are executed and acts as a
communication bridge (interface) between the user and the computer hardware. The main task an
operating system carries out is the allocation of resources and services, such as the allocation of
memory, devices, processors, and information. The operating system also includes programs to
manage these resources, such as a traffic controller, a scheduler, a memory management module, I/O
programs, and a file system.
An operating system is used as a communication channel between the computer hardware and the
user; it works as an intermediary between the system hardware and the end user. In this layered view:
The operating system controls and coordinates the use of hardware among the various applications and users.
Application programs define the ways in which the system resources are used to solve the computing
problems of the users.
Users interact with the application programs, which in turn rely on the operating system.
The operating system handles the following responsibilities:
Memory Management
The operating system manages the primary memory, or main memory. Main memory is made up of a
large array of bytes or words, where each byte or word is assigned a certain address. Main memory is
fast storage and can be accessed directly by the CPU. For a program to be executed, it must first be
loaded into main memory. The operating system manages the allocation and de-allocation of memory
to various processes and ensures that one process does not consume the memory allocated to another.
An operating system performs the following activities for memory management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program:
which memory addresses have already been allocated and which have not yet been used.
In multiprogramming, the OS decides the order in which processes are granted memory access, and
for how long.
It allocates the memory to a process when the process requests it and de-allocates the memory when
the process has terminated or is performing an I/O operation.
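The bookkeeping described above can be sketched with a toy allocation table. This is a deliberately simplified illustration (a naive bump allocator with invented process names and sizes), not how a real operating system implements memory management:

```python
# Simplified sketch of OS memory bookkeeping: which byte ranges
# belong to which process, and how much memory remains free.
class MemoryManager:
    def __init__(self, total_bytes):
        self.total = total_bytes
        self.allocations = {}   # pid -> (start address, size)
        self.next_free = 0      # naive bump pointer, for illustration only

    def allocate(self, pid, size):
        """Grant `size` bytes to process `pid`, or return None if exhausted."""
        if self.next_free + size > self.total:
            return None
        start = self.next_free
        self.allocations[pid] = (start, size)
        self.next_free += size
        return start

    def deallocate(self, pid):
        """Reclaim a terminated process's memory."""
        self.allocations.pop(pid, None)

mm = MemoryManager(1024)
a = mm.allocate("P1", 400)   # P1 gets bytes starting at address 0
b = mm.allocate("P2", 400)   # P2 gets bytes starting at address 400
c = mm.allocate("P3", 400)   # fails: only 224 bytes remain
```

A real memory manager would also reuse freed regions and handle fragmentation; the sketch only shows the track-allocate-deallocate cycle the notes describe.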
Processor Management
In a multi-programming environment, the OS decides the order in which processes have access to the
processor, and how much processing time each process has. This function of OS is called Process
Scheduling. An Operating System performs the following activities for Processor Management.
An operating system manages the processor's work by allocating various jobs to it and ensuring that
each process receives enough processor time to function properly.
It keeps track of the status of processes (the program that performs this task is known as the traffic
controller), allocates the CPU, that is the processor, to a process, and de-allocates the processor when
a process no longer requires it.
Device Management
An OS manages device communication via the devices' respective drivers. It performs the following
activities for device management: it keeps track of all devices connected to the system; it designates a
program responsible for every device, known as the Input/Output controller; it decides which process
gets access to a certain device and for how long; it allocates devices effectively and efficiently; and it
de-allocates devices when they are no longer required. There are various input and output devices, and
the OS controls the working of these devices: it receives requests from them, performs the
corresponding task, and communicates back to the requesting process.
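The allocate/deallocate decisions described above can be sketched as a small ownership table; the device and process names here are invented for illustration:

```python
# Sketch of the OS deciding which process holds a device at a time.
class DeviceManager:
    def __init__(self, devices):
        self.owner = {d: None for d in devices}  # device -> pid, or None if free

    def request(self, pid, device):
        """Allocate the device to `pid` if it is free; otherwise refuse."""
        if self.owner.get(device) is None:
            self.owner[device] = pid
            return True
        return False

    def release(self, pid, device):
        """De-allocate the device when the owning process no longer needs it."""
        if self.owner.get(device) == pid:
            self.owner[device] = None

dm = DeviceManager(["printer"])
dm.request("P1", "printer")          # P1 acquires the printer
busy = dm.request("P2", "printer")   # refused: printer already owned by P1
dm.release("P1", "printer")
free = dm.request("P2", "printer")   # now P2 gets it
```

A real OS would queue waiting processes rather than simply refusing; the sketch shows only the ownership bookkeeping.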
File Management
A file system is organized into directories for efficient or easy navigation and usage. These directories
may contain other directories and other files. An Operating System carries out the following file
management activities. It keeps track of where information is stored, user access settings, the status of
every file, and more. These facilities are collectively known as the file system. An OS keeps track of
information regarding the creation, deletion, transfer, copy, and storage of files in an organized way. It
also maintains the integrity of the data stored in these files, including the file directory structure, by
protecting against unauthorized access.
The user interacts with the computer system through the operating system; hence, the OS acts as an
interface between the user and the computer hardware. This user interface is offered through a set of
commands or a graphical user interface (GUI). Through this interface, the user interacts with the
applications and the machine hardware.
Security
The operating system protects user data using password protection and similar techniques. It also
prevents unauthorized access to programs and user data, and provides various techniques that assure
the integrity and confidentiality of that data.
Control over System Performance
The operating system monitors overall system health to help improve performance. It records the
response time between service requests and system responses to build a complete view of the
system's health. This can help improve performance by providing the information needed to
troubleshoot problems.
Job Accounting
The operating system keeps track of the time and resources used by various tasks and users; this
information can be used to track resource usage for a particular user or group of users. In a
multitasking OS where multiple programs run simultaneously, the OS also determines which
applications should run, in which order, and how much time should be allocated to each.
Error-Detecting Aids
The operating system constantly monitors the system to detect errors and avoid malfunctions. From
time to time, it checks the system for any external threat or malicious software activity, and it checks
the hardware for any type of damage. It displays alerts to the user so that appropriate action can be
taken against any damage caused to the system.
Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software
to the various users of the computer systems.
The management of various peripheral devices such as the mouse, keyboard, and printer is carried out
by the operating system. Today most operating systems are plug-and-play. These operating systems
automatically recognize and configure the devices with no user interference.
Network Management
Network Communication: Think of them as traffic cops for your internet traffic. Operating systems
help computers talk to each other and the internet. They manage how data is packaged and sent over
the network, making sure it arrives safely and in the right order.
Settings and Monitoring: Think of them as the settings and security guard for your internet
connection. They also let you set up your network connections, like Wi-Fi or Ethernet, and keep an
eye on how your network is doing. They make sure your computer is using the network efficiently and
securely, like adjusting the speed of your internet or protecting your computer from online threats.
The development of the operating system can be divided into four generations, explained as follows:
First Generation
This was the beginning of electronic computing systems, which replaced mechanical computing
systems. Mechanical systems had serious drawbacks: the speed at which humans can calculate is
limited, and humans easily make mistakes. In this generation there was no operating system, so
instructions had to be given to the computer system directly.
Second Generation
The batch processing system was introduced in the second generation: jobs or tasks were collected
into a series and then executed sequentially. The computer system of this generation was still not
equipped with a full operating system, but several operating system functions existed, such as FMS
and IBSYS.
Third Generation
In the third generation, operating systems were developed to serve multiple users at once. Interactive
users could communicate with a computer through an online terminal, so the operating system
became multi-user and multiprogramming.
Fourth Generation
In this generation the operating system is used for computer networks, where users are aware of the
existence of computers connected to one another. Users are also provided with a graphical user
interface (GUI), an extremely comfortable graphical computer interface, and the era of distributed
computing has begun.
With the arrival of new wearable devices such as smart watches, smart glasses, and VR gear, the
demand for unconventional operating systems is also rising.
1. Simple Structure
2. Monolithic Structure
3. Layered Approach Structure
4. Micro-Kernel Structure
5. Exo-Kernel Structure
6. Virtual Machines
SIMPLE STRUCTURE
There are four layers that make up the MS-DOS operating system, and each has its own set of
features.
These layers include ROM BIOS device drivers, MS-DOS device drivers, application
programs, and system programs.
The MS-DOS operating system benefits from layering because each level can be defined
independently and, when necessary, can interact with one another.
If the system is built in layers, it is simpler to design, manage, and update. Because of this, simple
structures can be used to build constrained, less complex systems.
However, this structure has serious drawbacks. When a single user program fails, the entire
operating system crashes. Because MS-DOS systems have a low level of abstraction, programs and
I/O procedures are visible to end users, giving them the potential for unwanted access. Since the
layers are interconnected and communicate with one another, there is no real abstraction or data
hiding, and the operating system's operations are accessible to the layers, which can result in data
tampering and system failure.
MONOLITHIC STRUCTURE
The monolithic operating system controls all aspects of the operating system's operation, including
file management, memory management, device management, and operational operations.
The core of an operating system is called the kernel. The kernel provides fundamental services to all
other system components and serves as the main interface between the operating system and the
hardware. In a monolithic design, the entire operating system runs as a single program in a single
address space, so the kernel can directly access all of the system's resources.
The monolithic operating system is often referred to as the monolithic kernel. Multiprogramming
techniques such as batch processing and time-sharing increase the processor's utilization. Sitting
directly on the hardware and in complete command of it, the monolithic kernel presents the role of a
virtual computer to the programs above it. This is an older style of operating system that was used, for
example, in banks to carry out simple tasks like batch processing and time-sharing, which allows
numerous users at different terminals to access the operating system.
Because layering is unnecessary and the kernel alone is responsible for managing all operations, a
monolithic system is easy to design and implement.
Because functions like memory management, file management, and process scheduling are
implemented in the same address space, the monolithic kernel also runs rather quickly compared to
other designs; using the same address space speeds up and reduces the time required for address
allocation for new processes.
On the other hand, the monolithic kernel's services share an address space and affect one another, so
if any one of them malfunctions, the entire system fails as well.
It is also not adaptable: launching a new service is difficult.
LAYERED STRUCTURE
The OS is separated into layers or levels in this kind of arrangement. Layer 0 (the lowest layer)
contains the hardware, and layer N (the highest layer) contains the user interface. These layers are
organized hierarchically, with the top-level layers making use of the capabilities of the lower-level
ones.
The functionalities of each layer are separated in this method, and abstraction is also an option.
Because layered structures are hierarchical, debugging is simpler, therefore all lower-level layers are
debugged before the upper layer is examined. As a result, the present layer alone has to be reviewed
since all the lower layers have already been examined.
Work duties are separated since each layer has its own functionality, and there is some amount
of abstraction.
Debugging is simpler because the lower layers are examined first, followed by the top layers.
MICRO-KERNEL STRUCTURE
The operating system is created using a micro-kernel framework that strips the kernel of any
unnecessary parts; those optional components are implemented instead as system and user programs.
Systems developed this way are called micro-kernels.
Each Micro-Kernel is created separately and is kept apart from the others. As a result, the system is
now more trustworthy and secure. If one Micro-Kernel malfunctions, the remaining operating system
is unaffected and continues to function normally.
EXOKERNEL
An operating system called Exokernel was created at MIT with the goal of offering application-level
management of hardware resources. The exokernel architecture's goal is to enable application-specific
customization by separating resource management from protection. Exokernel size tends to be
minimal due to its limited operability.
Because the OS sits between programs and the actual hardware, it will always affect the
functionality, performance, and scope of the applications built on it. The exokernel operating system
attempts to solve this issue by rejecting the idea that an operating system must offer abstractions on
which to base applications. The goal is to impose as few restrictions on the use of abstractions as
possible while still allowing developers the freedom to use abstractions when necessary. In the
exokernel architecture, a single tiny kernel moves all hardware abstractions into untrusted libraries
known as library operating systems. Exokernels differ from micro- and monolithic kernels in that
their primary objective is to avoid forced abstraction.
A drawback of this design is a decline in consistency across applications.
VIRTUAL MACHINES
A virtual machine abstracts the hardware of our personal computer, including the CPU, disk drives,
RAM, and NIC (Network Interface Card), into several different execution contexts based on our
needs, giving us the impression that each execution environment is a separate computer. Oracle
VirtualBox is an example of this.
Using CPU scheduling and virtual memory techniques, an operating system allows us to execute
multiple processes simultaneously while giving the impression that each one is using a separate
processor and virtual memory. System calls and a file system are examples of extra functionalities that
a process can have that the hardware is unable to give. Instead of offering these extra features, the
virtual machine method just offers an interface that is similar to that of the most fundamental
hardware. A virtual duplicate of the computer system underneath is made available to each process.
We can develop a virtual machine for a variety of reasons, all of which are fundamentally connected
to the capacity to share the same underlying hardware while concurrently supporting various
execution environments, i.e., various operating systems.
Disk systems are the fundamental problem with the virtual machine technique. Imagine that the
physical machine has only three disk drives but needs to host seven virtual machines. Clearly it is
impossible to assign a disk drive to every virtual machine, and the virtual machine software itself
requires a sizable amount of disk space to provide virtual memory and spooling. The solution is to
provide virtual disks.
The result is that each user gets their own virtual machine, on which they can run any of the
operating systems or software packages available for the underlying hardware. Virtual machine
software is concerned with multiplexing numerous virtual machines onto one physical machine; it
does not need to consider any user-support software. With this configuration, the problem of building
an interactive system for several users can be split into two manageable pieces.
Because each virtual machine is completely isolated from every other virtual machine, security
problems in one machine do not spread to the others.
A virtual machine may offer architecture for the instruction set that is different from that of
actual computers.
Simple availability, accessibility, and recovery convenience.
An operating system provides programs and users with the following services:
Program execution
File Management
Memory Management
Process Management
Resource Management
User Interface
Networking
Error handling
Time Management
Program Execution
It is the operating system that manages how a program is executed. It loads the program into
memory, after which the program is executed. The order in which programs execute depends on
the CPU scheduling algorithm, such as FCFS or SJF. While programs are executing, the operating
system also handles deadlocks, ensuring that processes do not end up waiting on each other's
resources forever. The operating system is responsible for the smooth execution of both user and
system programs, and it utilizes the available resources to run all types of functionality efficiently.
File Management
The operating system also helps in managing files. If a program needs access to a file, it is the
operating system that grants access, with permissions such as read-only or read-write. It also
provides a platform for the user to create and delete files. The operating system is responsible for
deciding where all types of data or files are stored, e.g., on a floppy disk, hard disk, or pen drive,
and it decides how the data should be manipulated and stored.
Memory Management
Let’s understand memory management by the OS in a simple way. Imagine a cricket team with a
limited number of players. The team manager (OS) decides whether an upcoming player will be in
the playing 11, the playing 15, or not included in the team at all, based on his performance. In the
same way, the OS first checks whether an upcoming program fulfils all requirements to get memory
space; if so, it checks how much memory will be sufficient for the program and then loads the
program into memory at a certain location. In this way, it prevents programs from using
unnecessary memory.
Process Management
Let’s understand process management in a unique way. Imagine our kitchen stove as the CPU,
where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen stove
(CPU) to cook different dishes (programs). The chef (OS) has to cook different dishes (programs),
so he ensures that no particular dish (program) takes an unnecessarily long time and that all dishes
(programs) get a chance to be cooked (executed). The chef (OS) basically schedules time for all
dishes (programs) to keep the kitchen (the whole system) running smoothly, and thus cooks
(executes) all the different dishes (programs) efficiently.
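The chef analogy is essentially round-robin scheduling: each program gets a fixed slice of CPU time in turn. A minimal sketch, with process names and burst times invented for the example:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Run each process for at most `quantum` time units per turn;
    return the order in which the processes finish."""
    queue = deque(bursts.items())   # (name, remaining time) pairs
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            # Not done within this slice: go to the back of the queue.
            queue.append((name, remaining - quantum))
        else:
            # Done within this slice.
            finished.append(name)
    return finished

order = round_robin({"A": 5, "B": 2, "C": 8}, quantum=3)  # -> ["B", "A", "C"]
```

Short jobs like B finish in their first slice, while long jobs like C cycle through the queue several times, which is exactly how no single dish monopolizes the stove.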
Security: The OS keeps our computer safe from unauthorized users by adding a security
layer to it. Security is essentially a layer of protection which protects the computer from
threats like viruses and hackers. The OS provides defenses such as firewalls and anti-virus
software and ensures the safety of the computer and personal information.
Privacy: The OS gives us the facility to keep essential information hidden, like having a
lock on a door that only you can open. It respects our secrets and provides the facility to
keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating system that
manages resource sharing. It also manages the CPU time among processes using CPU
Scheduling Algorithms. It also helps in the memory management of the system. It also
controls input-output devices. The OS also ensures the proper use of all the resources
available by deciding which resource to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the
operating system either through a command-line interface (CLI) or through a graphical user
interface (GUI). In a CLI, the command interpreter executes each user-specified command in turn.
A GUI offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to the
internet, sending and receiving data packets, and managing network connections.
Error Handling
The operating system also handles errors occurring in the CPU, in input-output devices, and
elsewhere. It ensures that errors do not occur frequently and fixes them, prevents processes from
coming to a deadlock, and looks for any type of error or bug that can occur during any task. A
well-secured OS sometimes also acts as a countermeasure, preventing breaches of the computer
system from external sources and handling them when they occur.
Time Management
Imagine a traffic light as the OS, which indicates to all the cars (programs) whether they should
stop (red: waiting queue), get ready (yellow: ready queue), or move (green: under execution). This
light (control) changes after a certain interval of time on each side of the road (computer system),
so that the cars (programs) from all sides of the road move smoothly without congestion.
What is a System Call?
A system call is a mechanism used by programs to request services from the operating system
(OS). In simpler terms, it is a way for a program to interact with the underlying system, such
as accessing hardware resources or performing privileged operations.
A user program interacts with the operating system using system calls. The program requests a
number of services, and the OS responds by invoking a number of system calls to fulfil the request.
A system call can be written in a high-level language like C or Pascal, or in assembly language.
When a high-level language is used, system calls are typically available as predefined functions
that the program can invoke directly.
A system call is initiated by the program executing a specific instruction, which triggers a
switch to kernel mode, allowing the program to request a service from the OS. The OS then
handles the request, performs the necessary operations, and returns the result back to the
program.
System calls are essential for the proper functioning of an operating system, as they provide a
standardized way for programs to access system resources. Without system calls, each
program would need to implement its methods for accessing hardware and system services,
leading to inconsistent and error-prone behaviour.
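As a sketch, Python's os module exposes thin wrappers over several common Unix system calls; the snippet below requests two services from the kernel, the process's identity via getpid() and an inter-process communication channel via pipe():

```python
import os

# os.getpid() wraps the getpid() system call: ask the kernel which
# process we are.
pid = os.getpid()

# os.pipe() wraps the pipe() system call: ask the kernel for a
# kernel-managed communication channel.
read_end, write_end = os.pipe()
os.write(write_end, b"ping")     # write() system call into the pipe
msg = os.read(read_end, 4)       # read() system call out of it
os.close(read_end)
os.close(write_end)
```

Each of these calls traps into kernel mode, the kernel performs the operation, and the result is returned to the program, exactly the request/response cycle described above.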
System calls are commonly required for process control, file management, device handling (I/O),
information maintenance, communication, protection, and networking. The main types of system
calls are:
o Process Control: end, abort, create, terminate, allocate, and free memory.
o File Management
o Device Management
o Information Maintenance
o Communication
Protection: System calls are used to access privileged operations that are not
available to normal user programs. The operating system uses this privilege to protect
the system from malicious or unauthorized access.
Kernel Mode: When a system call is made, the program is temporarily switched from
user mode to kernel mode. In kernel mode, the program has access to all system
resources, including hardware, memory, and other processes.
Context Switching: A system call requires a context switch, which involves saving
the state of the current process and switching to the kernel mode to execute the
requested service. This can introduce overhead, which can impact system
performance.
Error Handling: System calls can return error codes to indicate problems with the
requested service. Programs must check for these errors and handle them
appropriately.
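The error-code behaviour described above can be observed directly: when a system call fails, the kernel returns an error code (errno), which Python surfaces as an OSError. The path below is deliberately nonexistent:

```python
import errno, os

def try_open(path):
    """Attempt the open() system call; return None on success,
    or the errno the kernel reported on failure."""
    try:
        fd = os.open(path, os.O_RDONLY)
        os.close(fd)
        return None          # success: no error code
    except OSError as e:
        return e.errno       # the error code the system call returned

# Opening a file that does not exist makes open() fail with ENOENT
# ("No such file or directory").
code = try_open("/no/such/file/here")
```

Programs are expected to check these codes and handle each case (retry, report, fall back) rather than assume the call succeeded.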
Users need special resources: Sometimes programs need to do some special things
that can’t be done without the permission of the OS like reading from a file, writing to
a file, getting any information from the hardware, or requesting a space in memory.
The program makes a system call request: There are special predefined instructions
to make a request to the operating system. These instructions are nothing but just a
“system call”. The program uses these system calls in its code when needed.
Operating system sees the system call: When the OS sees the system call then it
recognizes that the program needs help at this time so it temporarily stops the program
execution and gives all the control to a special part of itself called ‘Kernel’. Now
‘Kernel’ solves the need of the program.
The operating system performs the operations: Now the operating system performs
the operation that is requested by the program. Example: reading content from a file
etc.
Operating system gives control back to the program: After performing the requested
operation, the OS gives control back to the program so that it can continue executing.
Examples of a System Call in Windows and Unix
System calls for Windows and Unix come in many different forms. These are listed in the
table below as follows:
Category                     Windows                        Unix
Process control              WaitForSingleObject()          wait()
File manipulation            CreateFile()                   open()
                             ReadFile()                     read()
                             WriteFile()                    write()
                             CloseHandle()                  close()
Device management            SetConsoleMode()               ioctl()
                             WriteConsole()                 write()
Information maintenance      GetCurrentProcessID()          getpid()
                             Sleep()                        sleep()
Communication                CreatePipe()                   pipe()
                             MapViewOfFile()                mmap()
Protection                   SetFileSecurity()              chmod()
                             SetSecurityDescriptorGroup()   chown()
Open(): Accessing a file on a file system is possible with the open() system call. It allocates the
resources the file needs and returns a handle that the process can use. A file can be opened by
multiple processes simultaneously or by just one process, depending on the file structure and file
system.
Read(): It is used to retrieve data from a file on the file system. In general, it accepts three
arguments:
A file descriptor.
A buffer to store the read data.
The number of bytes to read from the file.
Wait(): In some systems, a process might need to hold off until another process has finished
running before continuing. When a parent process creates a child process, the execution of
the parent process is halted until the child process is complete. The parent process is stopped
using the wait() system call. The parent process regains control once the child process has
finished running.
Write(): It is used to write data from a user buffer to a device such as a file. This system call is one
way for a program to produce data. Generally, there are three arguments:
A file descriptor.
A buffer containing the data to be written.
The number of bytes to write from the buffer.
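The arguments of read() and write() map directly onto Python's os.read and os.write wrappers (in Python the read buffer is returned rather than passed in); the filename here is invented for the example:

```python
import os

# write(fd, buffer, count): write bytes from a buffer to a file descriptor.
fd = os.open("sample.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"abcdef")          # kernel copies 6 bytes from our buffer
os.close(fd)

# read(fd, count): ask the kernel for at most `count` bytes;
# Python returns the filled buffer instead of taking one as an argument.
fd = os.open("sample.txt", os.O_RDONLY)
first = os.read(fd, 3)           # at most 3 bytes -> b"abc"
rest = os.read(fd, 100)          # the remaining bytes -> b"def"
os.close(fd)
```

Note that read() may return fewer bytes than requested, as the second call shows: asking for 100 bytes yields only the 3 that remain.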
Fork(): The fork() system call is used by a process to create a copy of itself, and it is one of the
most frequently used ways of creating processes in operating systems. After fork(), both the parent
and the child continue to execute; the parent may then use the wait() system call to suspend itself
until the child has finished running, at which point the parent regains control.
Exit(): A system call called exit() is used to terminate a program. In environments with
multiple threads, this call indicates that the thread execution is finished. After using the exit()
system function, the operating system recovers the resources used by the process.
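The fork()/wait()/exit() behaviour described above can be exercised from Python on Unix-like systems (os.fork is not available on Windows); the exit status 7 is an arbitrary choice for the example:

```python
import os

# fork() duplicates the calling process; it returns 0 in the child
# and the child's pid in the parent.
pid = os.fork()
if pid == 0:
    # Child: terminate immediately with exit status 7. os._exit()
    # ends the process without running interpreter cleanup.
    os._exit(7)
else:
    # Parent: wait for the child to finish, then recover its pid
    # and exit status from the kernel.
    child_pid, status = os.waitpid(pid, 0)
    code = os.WEXITSTATUS(status)   # -> 7
```

Until the parent calls waitpid(), the kernel keeps the terminated child's status around; collecting it lets the OS reclaim the child's remaining resources.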
Memory Management: System calls provide a way for programs to allocate and
deallocate memory, as well as access memory-mapped hardware devices.
Security: System calls provide a way for programs to access privileged resources,
such as the ability to modify system settings or perform operations that require
administrative permissions.
Performance Overhead: System calls involve switching between user mode and
kernel mode, which can slow down program execution.
Security Risks: Improper use or vulnerabilities in system calls can lead to security
breaches or unauthorized access to system resources.
System programming can be defined as the act of building systems software using system
programming languages. In the computer hierarchy, hardware comes first, then the operating
system, then system programs, and finally application programs. Program development and
execution can be done conveniently through system programs. Some system programs are simply
user interfaces, while others are more complex. System programs traditionally sit between the user
interface and the system calls.
In the context of an operating system, system programs are nothing but special software
which gives us facility to manage and control the computer’s hardware and resources. Here
are the examples of System Programs:
1. Command Line Interface (CLI): The CLI is an essential tool for users. It provides the
user the facility to write commands directly to the system to perform any operation. It
is a text-based way to interact with the operating system. CLIs can perform many tasks,
such as file manipulation, system configuration, and more.
2. Device drivers: Device drivers work as simple translators between the OS and devices.
A driver acts as an intermediary between the OS and a device, allowing each to
understand the other's language so that they can work together efficiently without
interruption.
3. Program loading and execution: When a program is ready after assembling and
compilation, it must be loaded into memory for execution. A loader is the part of an
operating system that is responsible for loading programs and libraries, and it is one of
the essential stages in starting a program. Absolute loaders, relocatable loaders, linkage
editors, and overlay loaders are provided by the system.
Module-2
Process
A process is basically a program in execution. Ex: we write our computer programs in a text file,
and when we execute this program, it becomes a process which performs all the tasks mentioned in
the program.
When a program is loaded into memory and becomes a process, it can be divided into four
sections ─ stack, heap, text and data.
Stack
The process Stack contains the temporary data such as method/function parameters, return address
and local variables.
Heap
This is the dynamically allocated memory given to a process during its run time.
Text
This includes the current activity represented by the value of Program Counter and the contents of
the processor's registers.
Data
This section contains the global and static variables.
When a process executes, it passes through different states. These stages may differ in different
operating systems, and the names of these states are also not standardized.
New State: In this step, the process is about to be created but not yet created. It is the
program that is present in secondary memory that will be picked up by the OS to create the
process.
Ready State: New -> Ready to run. After the creation of a process, the process enters the
ready state i.e. the process is loaded into the main memory. The process here is ready to run
and is waiting to get the CPU time for its execution. Processes that are ready for execution by
the CPU are maintained in a queue called a ready queue for ready processes.
Run State: The process is chosen from the ready queue by the OS for execution and the
instructions within the process are executed by any one of the available CPU cores.
Blocked or Wait State: Whenever the process requests access to I/O or needs input from the
user or needs access to a critical region(the lock for which is already acquired) it enters the
blocked or waits state. The process continues to wait in the main memory and does not
require CPU. Once the I/O operation is completed the process goes to the ready state.
Terminated or Completed State: Process is killed as well as PCB is deleted. The resources
allocated to the process will be released or de-allocated.
Suspend Ready: When the ready queue becomes full, some processes are moved to a
suspended ready state.
Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that
was performing an I/O operation and was moved to secondary memory due to a lack of
main memory. When its work is finished, it may go to the suspend ready state.
A process can move between different states in an operating system based on its execution status
and resource availability. Here are some examples of how a process can move between different
states:
New to Ready: When a process is created, it is in a new state. It moves to the ready state
when the operating system has allocated resources to it and it is ready to be executed.
Ready to Running: When the CPU becomes available, the operating system selects a process
from the ready queue depending on various scheduling algorithms and moves it to the
running state.
Running to Blocked: When a process needs to wait for an event to occur (I/O operation or
system call), it moves to the blocked state. For example, if a process needs to wait for user
input, it moves to the blocked state until the user provides the input.
Running to Ready: When a running process is preempted by the operating system, it moves
to the ready state. For example, if a higher-priority process becomes ready, the operating
system may preempt the running process and move it to the ready state.
Blocked to Ready: When the event a blocked process was waiting for occurs, the process
moves to the ready state. For example, if a process was waiting for user input and the input
is provided, it moves to the ready state.
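The legal moves between states described above can be sketched as a small table of allowed transitions. This is an illustrative model only; the state names and the `transition` helper are made up for this sketch, not a real OS API.

```python
# Allowed process-state transitions, as described in the text above.
ALLOWED = {
    "new": {"ready"},
    "ready": {"running", "suspend_ready"},
    "running": {"ready", "blocked", "terminated"},
    "blocked": {"ready", "suspend_blocked"},
    "suspend_ready": {"ready"},
    "suspend_blocked": {"suspend_ready"},
    "terminated": set(),
}

def transition(state, new_state):
    """Return the new state, or raise if the move is not allowed."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one legal lifecycle: created, scheduled, waits for I/O, resumes, finishes.
s = "new"
for nxt in ["ready", "running", "blocked", "ready", "running", "terminated"]:
    s = transition(s, nxt)
print(s)  # terminated
```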
A Process Control Block is a data structure maintained by the Operating System for every process.
The PCB is identified by an integer process ID (PID). A PCB keeps all the information needed to keep
track of a process as listed below in the table –
Process State
The current state of the process, i.e., whether it is ready, running, waiting, etc.
Process privileges
This is required to allow/disallow access to system resources.
Process ID
Unique identification for each process in the operating system.
Pointer
A pointer to parent process.
Program Counter
Program Counter is a pointer to the address of the next instruction to be executed for this process.
CPU registers
Various CPU registers whose contents must be saved when the process leaves the running
state, so that execution can resume from the same point later.
CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.
Memory Management Information
This includes the information of the page table, memory limits, and segment table,
depending on the memory system used by the operating system.
Accounting information
This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
IO status information
This includes a list of the I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain
different information in different operating systems.
[Figure: simplified diagram of a Process Control Block]
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
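A PCB can be sketched as a simple record holding the fields listed above. This is only an illustrative model; the field names are chosen for this sketch and are not taken from any real kernel.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Illustrative Process Control Block, mirroring the fields listed above."""
    pid: int                                        # unique process ID
    state: str = "new"                              # current process state
    program_counter: int = 0                        # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0                               # scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0                      # accounting information
    parent_pid: Optional[int] = None                # pointer to parent process

# Create a PCB for a new process, then mark the process ready.
pcb = PCB(pid=42, parent_pid=1)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```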
The process can be split down into so many threads. For example, in a browser, many tabs can be
viewed as threads. MS Word uses many threads - formatting text from one thread, processing input
from another thread, etc.
Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new
process.
o Threads can share common data, so they do not need to use inter-process communication.
There are two types of threads:
1. Kernel-level thread.
2. User-level thread.
User-level thread
The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented by the user. If a user performs a blocking
operation in a user-level thread, the whole process is blocked. The kernel knows nothing
about user-level threads and manages them as if they were single-threaded processes.
Examples: Java threads, POSIX threads.
Advantages of user-level threads:
1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at
the kernel level.
3. The representation of user-level threads is very simple: the registers, PC, stack, and
mini thread control blocks are stored in the address space of the user-level process.
4. It is simple to create, switch, and synchronize threads without the intervention of
the kernel.
Disadvantages of user-level threads:
1. User-level threads lack coordination between the thread and the kernel.
Kernel-level thread
The operating system recognizes kernel-level threads. There is a thread control block and
a process control block in the system for each thread and process when kernel-level
threads are used. Kernel-level threads are implemented by the operating system: the
kernel knows about all the threads and manages them, and it offers system calls to create
and manage threads from user space. The implementation of kernel threads is more difficult
than that of user threads, and context switch time is longer for kernel threads. However,
if a kernel thread performs a blocking operation, another thread in the same process can
continue execution. Examples: Windows, Solaris.
Kernel-level threads are well suited to applications that block frequently.
Components of Threads
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
o Enhanced throughput of the system: When a process is split into many threads, and each
thread is treated as a job, the number of jobs completed per unit time increases, and
so the throughput of the system increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.
o Responsiveness: When a process is split into several threads, the process can respond
to the user as soon as one thread completes its execution.
o Resource sharing: Resources can be shared between all threads within a process, such as
code, data, and files. Note: The stack and register cannot be shared between threads. There
is a stack and register for each thread.
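The resource-sharing benefit can be illustrated with Python's standard `threading` module: two threads of the same process append to one shared list, something separate processes could not do directly. The thread names below are made up for the example.

```python
import threading

# Two threads of one process share the same data (the list below); each
# thread still has its own stack. A lock protects the shared structure.
shared = []
lock = threading.Lock()

def worker(name, count):
    for i in range(count):
        with lock:                      # serialize access to shared data
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("format", 3))
t2 = threading.Thread(target=worker, args=("input", 3))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(shared))  # 6: both threads wrote into the same process data
```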
What is Multi-Threading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below.
Process scheduling is the activity of the process manager that handles the removal of the running
process from the CPU and the selection of another process based on a particular strategy.
Categories of Scheduling in OS
1. Non-preemptive: In non-preemptive scheduling, a resource cannot be taken from a process
until the process completes execution. The CPU is switched only when the running process
terminates or moves to a waiting state.
2. Preemptive: In preemptive scheduling, the OS allocates the resources to a process for a
fixed amount of time. A process may switch from the running state to the ready state, or
from the waiting state to the ready state. This switching occurs because the CPU may give
priority to other processes and replace the running process with a higher-priority one.
A long-term scheduler is a scheduler that is responsible for bringing processes from the JOB queue
(or secondary memory) into the READY queue (or main memory). In other words, a long-term
scheduler determines which programs will enter into the RAM for processing by the CPU.
Long-term schedulers are also called job schedulers. They have a long-term effect on CPU
performance and are responsible for the degree of multiprogramming, i.e., the total number
of processes present in the READY queue. Time-sharing operating systems such as Windows
and UNIX usually don't have a long-term scheduler; these systems put all processes in
main memory for the short-term scheduler.
Short-Term or CPU Scheduler
1. The short-term scheduler is also known as the CPU scheduler. It selects one of the
jobs from the ready queue and dispatches it to the CPU for execution.
2. A scheduling algorithm is used to select which job is dispatched for execution. The
job of the short-term scheduler can be very critical: if it selects a job whose CPU
burst time is very high, then all the jobs after it will have to wait in the ready
queue for a very long time.
3. This problem is called starvation, and it may arise if the short-term scheduler makes
mistakes while selecting the job.
Once a job is selected, the dispatcher takes over; its functions are:
1. Switching context.
2. Switching to user mode.
3. Jumping to the proper location in the newly loaded program.
Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping (moving
processes from main memory to disk and vice versa). Swapping may be necessary to improve the
process mix, or because a change in memory requirements has overcommitted available memory,
requiring memory to be freed up. It helps maintain a balance between I/O-bound and
CPU-bound processes, and it reduces the degree of multiprogramming.
I/O schedulers: I/O schedulers are in charge of managing the execution of I/O operations such as
reading and writing to discs or networks. They can use various algorithms to determine the order in
which I/O operations are executed, such as FCFS (First-Come, First-Served) or RR (Round Robin).
Real-time schedulers: In real-time systems, real-time schedulers ensure that critical tasks are
completed within a specified time frame. They can prioritize and schedule tasks using various
algorithms such as EDF (Earliest Deadline First) or RM (Rate Monotonic).
Context Switching
Context switching is a mechanism to store and restore the state or context of a CPU in the
Process Control Block, so that a process execution can be resumed from the same point at a
later time. Context switching makes it possible for multiple processes to share a single
CPU, and it is an essential feature of any multitasking operating system.
When the scheduler switches the CPU from executing one process to another, the state of
the currently running process is saved into its process control block. The state for the
process that will run next (its PC, registers, etc.) is then loaded from that process's
own PCB. After that, the second process can start executing.
During a context switch, the following information is stored in the PCB of the process
being switched out:
Program Counter
Scheduling information
The base and limit register value
Currently used register
Changed State
I/O State information
Accounting information
What are the different terminologies to take care of in any CPU Scheduling algorithm?
Arrival Time: Time at which the process arrives in the ready queue.
Completion Time: Time at which process completes its execution.
Burst Time: Time required by a process for CPU execution.
Turn Around Time: Time Difference between completion time and arrival time.
Turn Around Time = Completion Time – Arrival Time
Waiting Time(W.T): Time Difference between turnaround time and burst time.
Waiting Time = Turn Around Time – Burst Time
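A quick worked check of the two formulas above, using made-up values (arrival time 2, burst time 5, completion time 12):

```python
# Hypothetical values for one process:
arrival, burst, completion = 2, 5, 12

turnaround = completion - arrival   # Turn Around Time = CT - AT
waiting = turnaround - burst        # Waiting Time = TAT - BT

print(turnaround, waiting)  # 10 5
```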
The first come first serve scheduling algorithm states that the process that requests the CPU
first is allocated the CPU first. It is implemented by using the FIFO queue. When a process
enters the ready queue, its PCB is linked to the tail of the queue. When the CPU is free, it is
allocated to the process at the head of the queue. The running process is then removed
from the queue. FCFS is a non-preemptive scheduling algorithm.
Characteristics of FCFS
FCFS is a non-preemptive CPU scheduling algorithm.
Tasks are always executed on a First-come, First-serve concept.
FCFS is easy to implement and use.
This algorithm is not very efficient in performance, and the wait time is quite
high.
Algorithm for FCFS Scheduling
The waiting time for the first process is 0 as it is executed first.
The waiting time for each subsequent process (assuming the CPU is never idle between
processes) can be calculated by:
wt[i] = ( at[i – 1] + bt[i – 1] + wt[i – 1] ) – at[i]
where
wt[i] = waiting time of current process
at[i-1] = arrival time of previous process
bt[i-1] = burst time of previous process
wt[i-1] = waiting time of previous process
at[i] = arrival time of current process
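The recurrence above can be turned into a short function. The arrival times below match the worked example that follows, and the burst times (5, 3, 8, 6) are inferred from its waiting times; note that the last process's burst time does not affect any waiting time.

```python
def fcfs_waiting_times(at, bt):
    """Waiting time per process under FCFS, using the recurrence above.
    Assumes processes are listed in arrival order and the CPU is never idle."""
    wt = [0]  # the first process is executed immediately, so it never waits
    for i in range(1, len(at)):
        wt.append((at[i - 1] + bt[i - 1] + wt[i - 1]) - at[i])
    return wt

# Arrival times 0,1,2,3 with burst times 5,3,8,6 reproduce the worked example:
print(fcfs_waiting_times([0, 1, 2, 3], [5, 3, 8, 6]))  # [0, 4, 6, 13]
```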
Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13
Advantages of FCFS
The simplest and basic form of CPU Scheduling algorithm
Easy to implement
First come first serve method
It is well suited for batch systems where the longer time periods for each process
are often acceptable.
Disadvantages of FCFS
As it is a Non-preemptive CPU Scheduling Algorithm, hence it will run till it
finishes the execution.
The average waiting time in the FCFS is much higher than in the others
It suffers from the Convoy effect.
Not very efficient due to its simplicity
Processes that are at the end of the queue, have to wait longer to finish.
It is not suitable for time-sharing operating systems where each process should
get the same amount of CPU time.
Example
In the Example, there are 7 processes P1, P2, P3, P4, P5, P6 and P7. Their
priorities, Arrival Time and burst time are given in the table.
Process ID   Priority   Arrival Time   Burst Time
1            2          0              3
2            6          2              5
3            3          1              4
4            5          4              2
5            7          6              9
6            4          5              4
7            10         7              10
We can prepare the Gantt chart according to the Non Preemptive priority
scheduling.
Process P1 arrives at time 0 with a burst time of 3 units and priority number 2. Since no
other process has arrived yet, the OS schedules it immediately.
During the execution of P1, two more processes, P2 and P3, arrive. Since the priority
number of P3 is lower than that of P2, the CPU will execute P3 before P2.
During the execution of P3, all the remaining processes become available in the ready
queue. The process with the lowest priority number is given the CPU next. Since P6 has
priority number 4, it will be executed just after P3.
After P6, P4 has the least priority number among the available processes; it will
get executed for the whole burst time.
Since all the jobs are now available in the ready queue, they will be executed according
to their priorities. If two jobs have the same priority number, the one with the earlier
arrival time is executed first.
From the GANTT Chart prepared, we can determine the completion time of every
process. The turnaround time, waiting time and response time will be
determined.
Process ID  Priority  Arrival Time  Burst Time  Completion Time  Turnaround Time  Waiting Time  Response Time
1           2         0             3           3                3                0             0
2           6         2             5           18               16               11            13
3           3         1             4           7                6                2             3
4           5         4             2           13               9                7             11
5           7         6             9           27               21               12            18
6           4         5             4           11               6                2             7
7           10        7             10          37               30               18            27
Once all the jobs get available in the ready queue, the algorithm will behave as
non-preemptive priority scheduling, which means the job scheduled will run till
the completion and no preemption will be done.
Example
There are 7 processes P1, P2, P3, P4, P5, P6 and P7 given. Their respective
priorities, Arrival Times and Burst times are given in the table below.
Process ID   Priority   Arrival Time   Burst Time
1            2          0              1
2            6          1              7
3            3          2              3
4            5          3              6
5            4          4              5
6            10         5              15
7            9          6              8
In this example a lower priority number means a higher priority, so priority 2 is the
highest priority in the table and priority 10 the lowest.
The next process, P3, arrives at time unit 2. The priority of P3 is higher than that of
P2, so the execution of P2 is stopped and P3 is scheduled on the CPU.
During the execution of P3, three more processes, P4, P5, and P6, become available.
Since all three have a lower priority than the process in execution, none of them can
preempt it. P3 completes its execution, and then P5 is scheduled, since it has the
highest priority among the available processes.
During the execution of P5, all the processes become available in the ready queue. At
this point, the algorithm starts behaving as non-preemptive priority scheduling: the OS
simply takes the process with the highest priority and executes it until completion. In
this case, P4 is scheduled and executed to completion.
Once P4 is completed, the process with the highest priority in the ready queue is P2, so
P2 is scheduled next.
P2 is given the CPU until completion; its remaining burst time is 6 units. P7 is
scheduled after it.
The only remaining process is P6, which has the lowest priority; the operating system
has no choice but to execute it, so it runs last.
The Completion Time of each process is determined with the help of GANTT
chart. The turnaround time and the waiting time can be calculated by the
following formula.
Process ID  Priority  Arrival Time  Burst Time  Completion Time  Turnaround Time  Waiting Time
1           2         0             1           1                1                0
2           6         1             7           22               21               14
3           3         2             3           5                3                0
4           5         3             6           16               13               7
5           4         4             5           10               6                1
6           10        5             15          45               40               25
7           9         6             8           30               24               16
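The preemptive priority schedule above can also be checked with a simulation that advances one time unit at a time, re-picking the highest-priority ready process at each step. As before, this is an illustrative sketch with made-up names, assuming a lower priority number means higher priority.

```python
def priority_preemptive(procs):
    """procs: list of (pid, priority, arrival, burst); lower number = higher
    priority; preemptive. Returns {pid: (completion, turnaround, waiting)}."""
    info = {pid: (prio, at, bt) for pid, prio, at, bt in procs}
    remaining = {pid: bt for pid, _, _, bt in procs}
    time, out = 0, {}
    while remaining:
        ready = [p for p in remaining if info[p][1] <= time]
        if not ready:                   # CPU idle until the next arrival
            time += 1
            continue
        # highest priority (lowest number) among ready; ties by arrival time
        p = min(ready, key=lambda q: (info[q][0], info[q][1]))
        remaining[p] -= 1               # run one time unit; may be preempted
        time += 1
        if remaining[p] == 0:
            prio, at, bt = info[p]
            out[p] = (time, time - at, time - at - bt)
            del remaining[p]
    return out

table = [(1, 2, 0, 1), (2, 6, 1, 7), (3, 3, 2, 3), (4, 5, 3, 6),
         (5, 4, 4, 5), (6, 10, 5, 15), (7, 9, 6, 8)]
print(priority_preemptive(table)[6])  # (45, 40, 25)
```

Running it on the example data gives P6 a completion time of 45, turnaround time 40, and waiting time 25, matching the table above.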