Unit - II OS
When a program is launched on an operating system such as Windows, it starts in user mode. When a user-mode program begins to run, Windows creates a process and a virtual address space (the address space for that process) for it. User-mode programs are less privileged than kernel-mode code and are not allowed to access system resources directly. For instance, if an application running in user mode wants to access a system resource, it must first go through the operating system kernel by issuing a system call.
In user mode, applications run with limited privileges to prevent direct access to hardware,
ensuring system stability. In kernel mode, the operating system has unrestricted access to all
hardware resources, enabling it to perform critical tasks such as memory management and
process control.
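Even a simple console write crosses this user/kernel boundary. A minimal sketch using Python's standard os module, whose os.write is a thin wrapper over the underlying write() system call:

```python
import os

# os.write(1, ...) issues the write() system call on file descriptor 1
# (standard output): the CPU switches to kernel mode, the kernel performs
# the privileged I/O, and control returns to user mode along with the
# number of bytes written.
n = os.write(1, b"hello from user mode\n")
```

The program itself never touches the terminal hardware; the kernel does that work on its behalf and merely reports the result back.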
Advantages of User Mode: running applications with limited privileges isolates faults and protects system stability.
Disadvantages of User Mode:
Performance Overhead – switching into kernel mode for every privileged operation adds cost.
Limited Access – hardware and system resources cannot be accessed directly.
The kernel is the core program on which all other operating system components rely. It accesses the hardware components, schedules which processes run on the system and when, and manages the interaction between application software and hardware. It is therefore the most privileged program: unlike other programs, it can interact with the hardware directly. When a program running in user mode needs hardware access, for example to a webcam, it must first go through the kernel via a system call; to carry out such requests, the CPU switches from user mode to kernel mode at the time of execution. After the request has been served, the CPU switches back to user mode.
Advantages of Kernel Mode: unrestricted access to hardware and memory lets the OS perform critical tasks directly and efficiently.
Disadvantages of Kernel Mode:
Increased Risk – a bug or crash in kernel-mode code can bring down the entire system.
Complex Debugging – faults in privileged code are harder to isolate and reproduce.
System Call
A system call is a mechanism used by programs to request services from the operating system
(OS). In simpler terms, it is a way for a program to interact with the underlying system, such
as accessing hardware resources or performing privileged operations.
A user program interacts with the operating system through system calls. The program requests a number of services, and the OS responds by invoking the corresponding system calls to fulfill the request. A system call can be written in a high-level language like C or Pascal, or in assembly language. When a high-level language is used, system calls appear as predefined functions that the program can invoke directly.
A system call is initiated by the program executing a specific instruction, which triggers a
switch to kernel mode, allowing the program to request a service from the OS. The OS then
handles the request, performs the necessary operations, and returns the result back to the
program.
System calls are essential for the proper functioning of an operating system, as they provide a standardized way for programs to access system resources. Without system calls, each program would need to implement its own methods for accessing hardware and system services, leading to inconsistent and error-prone behavior.
Services commonly requested through system calls include:
o Process Control: end, abort, create, terminate, allocate, and free memory
o Device Management / Device Handling (I/O)
o Information Maintenance
o Communication
o Protection
o Networking, etc.
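Several of these categories can be seen through Python's os module, which exposes thin wrappers over the underlying system calls (the file used here is created on the fly; exact syscall names vary by OS):

```python
import os
import tempfile

# Information maintenance: ask the kernel for this process's ID.
pid = os.getpid()

# File/device management: open(), write(), read(), and close() wrappers.
fd, path = tempfile.mkstemp()
os.write(fd, b"syscall demo")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)       # read at most 64 bytes via the read() syscall
os.close(fd)
os.unlink(path)              # file management: remove the temporary file
```

Each call above crosses into kernel mode, performs its privileged operation, and returns a result to the user-mode program.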
Interface: System calls provide a well-defined interface between user programs and the
operating system. Programs make requests by calling specific functions, and the operating
system responds by executing the requested service and returning a result.
Protection: System calls are used to access privileged operations that are not available to
normal user programs. The operating system uses this privilege to protect the system from
malicious or unauthorized access.
Kernel Mode: When a system call is made, the program is temporarily switched from
user mode to kernel mode. In kernel mode, the program has access to all system
resources, including hardware, memory, and other processes.
Context Switching: A system call requires a context switch, which involves saving the
state of the current process and switching to the kernel mode to execute the requested
service. This can introduce overhead, which can impact system performance.
Error Handling: System calls can return error codes to indicate problems with the
requested service. Programs must check for these errors and handle them appropriately.
Synchronization: System calls can be used to synchronize access to shared resources,
such as files or network connections. The operating system provides synchronization
mechanisms, such as locks or semaphores, to ensure that multiple programs can access
these resources safely.
Users need special resources: Sometimes programs need to do things that require the OS's permission, such as reading from a file, writing to a file, getting information from the hardware, or requesting space in memory.
The program makes a system call request: There are special predefined instructions for making a request to the operating system. These instructions are the system calls, and the program uses them in its code when needed.
The operating system sees the system call: When the OS sees the system call, it recognizes that the program needs help, temporarily stops the program's execution, and hands control to a special part of itself called the kernel. The kernel then serves the program's request.
The operating system performs the operation: The OS performs the operation that the program requested, for example reading content from a file.
The operating system gives control back to the program: After performing the requested operation, the OS returns control to the program so that it can continue executing.
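The error-handling side of this hand-off is easy to observe: when the kernel cannot carry out the requested operation, it returns an error code along with control. A sketch (the path below is deliberately nonexistent):

```python
import errno
import os

# Step 2: the program issues the open() system call.
# Step 3: the CPU switches to kernel mode and the kernel takes over.
try:
    fd = os.open("/no/such/file", os.O_RDONLY)
except FileNotFoundError as e:
    # Steps 4-5: the kernel cannot perform the operation, so it returns
    # the ENOENT error code and hands control back; Python surfaces that
    # code as an exception.
    err = e.errno
```

This is the "Error Handling" property described earlier: the program must check for and handle error codes returned by system calls.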
System Programs
System programming can be defined as the act of building systems software using system programming languages. In the computer hierarchy, hardware comes first, then the operating system, then system programs, and finally application programs. System programs make program development and execution convenient. Some system programs are simply user interfaces, while others are more complex. They traditionally sit between the user interface and the system calls.
In the context of an operating system, system programs are special software that let us manage and control the computer's hardware and resources. Because these programs work closely with the operating system, they execute operations quickly and help perform essential operations that application software cannot handle.
Note: The user can only see up to the system programs; he cannot see the system calls beneath them.
Here are the examples of System Programs:
1. File Management: A file is a collection of specific information stored in the memory of a
computer system. File management is defined as the process of manipulating files in the
computer system, its management includes the process of creating, modifying and
deleting files.
2. Command-Line Interface (CLI): The CLI is an essential tool that lets the user write commands directly to the system to perform operations. It is a text-based way to interact with the operating system and can perform many tasks, such as file manipulation and system configuration.
3. Device Drivers: Device drivers act as translators between the OS and devices. They serve as intermediaries, allowing the OS and the devices to understand each other's language so that they can work together efficiently and without interruption.
4. Status Information: Some users ask for simple information such as the date, the time, or the amount of available memory or disk space; others want detailed performance, logging, and debugging information, which is more complex. All this information is formatted and displayed or printed: a terminal, another output device, a file, or a window of the GUI is used for showing the output of programs.
5. File Modification: These programs are used for modifying the contents of files stored on disks or other storage devices. We use different types of editors for this, and special commands to search the contents of files or perform transformations on them.
6. Programming-Language Support: Compilers, assemblers, debuggers, and interpreters for common programming languages are provided to users, giving full support for running programs in all the important languages.
7. Program Loading and Execution: When a program is ready after assembly and compilation, it must be loaded into memory for execution. A loader is the part of an operating system responsible for loading programs and libraries, and it is one of the essential stages in starting a program. Absolute loaders, relocatable loaders, linkage editors, and overlay loaders are provided by the system.
8. Communications: These programs provide connections among processes, users, and computer systems. Users can send messages to another user's screen, send e-mail, browse web pages, log in remotely, and transfer files from one user to another.
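System programs are often invoked from other programs, not just from a shell. A hedged sketch using Python's subprocess module to run the echo command and capture its output over a pipe (this assumes a Unix-like system where echo is on the PATH):

```python
import subprocess

# Ask the OS to load and execute a system program ("echo"), much as a
# CLI would: a new process is created, the loader brings the program
# into memory, and its output comes back to us over a pipe.
result = subprocess.run(["echo", "system programs"],
                        capture_output=True, text=True)
```

After the call, result.stdout holds the program's output and result.returncode its exit status.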
System View
The OS may also be viewed as just a resource allocator. A computer system comprises various resources, such as hardware and software, which must be managed effectively. The operating system manages the resources, decides between competing demands, controls program execution, and so on. According to this point of view, the operating system's purpose is to maximize performance: it is responsible for managing hardware resources and allocating them to programs and users.
From the user point of view, we've discussed the numerous applications that require varying
degrees of user participation. However, we are more concerned with how the hardware interacts
with the operating system than with the user from a system viewpoint. The hardware and the
operating system interact for a variety of reasons, including:
1. Resource Allocation
The hardware contains several resources such as registers, caches, RAM, ROM, CPUs, and I/O devices. The operating system allocates these resources when application programs demand them; only the operating system can do so, and it uses several tactics and strategies, including paging, virtual memory, and caching, to get the most out of its processing power and memory space. Efficient allocation matters for the user viewpoint as well, because poor resource allocation can make the user's system lag or hang, degrading the user experience.
2. Control Program
The control program governs how input and output devices (hardware) interact with the operating system. The user may request an action that can only be done with I/O devices; in this case, the operating system must have proper means to communicate with, control, detect, and handle such devices.
Process Abstraction
Process abstraction is a fundamental operating system (OS) abstraction that hides the details of
threads of execution and represents a single thing the computer is doing. Processes are also
known as applications.
In general, abstraction in computing is the process of hiding the complexity and details of a
system and presenting a simplified view to the user or application. This makes the system more
generic and easier to understand.
APIs are sets of functions, protocols, and data structures that define how applications can interact
with the OS. Applications can use APIs to access devices like keyboards, monitors, and disk
drives without needing to know the specifics of the hardware or OS.
Concurrency scheduler
Process hierarchy
In an operating system, a process is a program that is being executed. During its execution, a
process goes through different states. Understanding these states helps us see how the
operating system manages processes, ensuring that the computer runs efficiently.
A process passes through a minimum of five states during its execution, although the names of these states are not standardized. Each process goes through several stages throughout its life cycle; we discuss these states in detail below.
New State: The process is about to be created but does not exist yet. It is the program, present in secondary memory, that will be picked up by the OS to create the process.
Ready State: New -> Ready to run. After the creation of a process, the process enters the
ready state i.e. the process is loaded into the main memory. The process here is ready to
run and is waiting to get the CPU time for its execution. Processes that are ready for
execution by the CPU are maintained in a queue called a ready queue for ready processes.
Run State: The process is chosen from the ready queue by the OS for execution, and the instructions within the process are executed by one of the available processors.
Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs access to a critical region whose lock is already acquired, it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the I/O operation is completed, the process goes back to the ready state.
Suspend Ready: A process that was initially in the ready state but was swapped out of main memory (see the Virtual Memory topic) and placed in external storage by the scheduler is said to be in the suspend ready state. The process transitions back to the ready state when it is brought into main memory again.
Suspend Wait or Suspend Blocked: Similar to suspend ready, but applies to a process that was performing I/O when a lack of main memory caused it to be moved to secondary memory. When its I/O finishes, it may move to the suspend ready state.
CPU and I/O Bound Processes: If a process is intensive in terms of CPU operations, it is called a CPU-bound process; similarly, if it is intensive in terms of I/O operations, it is called an I/O-bound process.
How Does a Process Move From One State to Another?
A process can move between different states in an operating system based on its execution
status and resource availability. Here are some examples of how a process can move between
different states:
New to Ready: When a process is created, it is in a new state. It moves to the ready state
when the operating system has allocated resources to it and it is ready to be executed.
Ready to Running: When the CPU becomes available, the operating system selects a
process from the ready queue depending on various scheduling algorithms and moves it to
the running state.
Running to Blocked: When a process needs to wait for an event to occur (I/O operation
or system call), it moves to the blocked state. For example, if a process needs to wait for
user input, it moves to the blocked state until the user provides the input.
Blocked to Ready: When the event a blocked process was waiting for occurs, the process
moves to the ready state. For example, if a process was waiting for user input and the
input is provided, it moves to the ready state.
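The legal transitions above can be captured in a tiny state machine. The sketch below is purely illustrative (the state and event names are made up for this example; they are not an OS API):

```python
# Legal process-state transitions, as described above (a toy model,
# not a real scheduler).
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "io_wait"): "blocked",
    ("blocked", "io_done"): "ready",
    ("running", "exit"): "terminated",
}

def move(state, event):
    # Only the transitions listed above are permitted.
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")

# Walk one process through a full life cycle, including an I/O wait.
s = "new"
for event in ["admit", "dispatch", "io_wait", "io_done", "dispatch", "exit"]:
    s = move(s, event)
```

Note that there is no direct "blocked to running" entry: a blocked process must first return to the ready queue, just as the text describes.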
In a process, a thread refers to a single sequential flow of activity being executed; it is also known as a thread of execution or thread of control. Any operating system process can execute threads, so a process can have multiple threads.
Why Do We Need Thread?
Threads run in parallel, improving application performance. Each thread has its own CPU state and stack, but all threads of a process share its address space and environment.
Threads can share common data so they do not need to use inter-process communication.
Like the processes, threads also have states like ready, executing, blocked, etc.
Priority can be assigned to the threads just like the process, and the highest priority thread
is scheduled first.
Each thread has its own Thread Control Block (TCB). As with processes, a context switch occurs for threads, and register contents are saved in the TCB. Because threads share the same address space and resources, synchronization is also required among the thread's various activities.
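The sharing described above is easy to see in code: two threads in the same process update one shared counter, using a lock for the synchronization the text mentions. A minimal sketch with Python's standard threading module:

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        # The lock serializes access to the shared variable, preventing
        # lost updates from interleaved increments.
        with lock:
            counter += 1

# Two threads share the process's address space, so both see `counter`.
t1 = threading.Thread(target=work, args=(10_000,))
t2 = threading.Thread(target=work, args=(10_000,))
t1.start(); t2.start()
t1.join(); t2.join()
```

Because the threads share memory directly, no inter-process communication is needed; the lock alone makes the combined result deterministic.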
Types of Threads
User-Level Thread: A user-level thread is a thread that is not created using system calls; the kernel plays no part in managing user-level threads, and they can be implemented easily by the user. Because the kernel is unaware of them, it manages a process containing user-level threads as if it were a single-threaded process. Let's look at the advantages and disadvantages of user-level threads.
Kernel-Level Thread: A kernel-level thread is a thread that the operating system recognizes directly. Kernel-level threads have their own thread table where the kernel keeps track of them, and the operating system kernel helps in managing them. Kernel-level threads have somewhat longer context-switching times.
Disadvantages of Kernel-Level Threads
Threading Issues in OS
System Call
Thread Cancellation
Signal Handling
Thread Pool
Thread Specific Data
Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the
running process from the CPU and the selection of another process based on a particular
strategy.
Categories of Scheduling
Preemptive: The CPU can be taken away from a running process, for example when a higher-priority process arrives or a time slice expires, and allocated to another process.
Non-Preemptive: A process's resources cannot be taken away before the process has finished running; resources are reassigned only when the running process finishes or transitions to a waiting state.
Context Switching
In order for a process's execution to be continued from the same point at a later time, context switching is the mechanism used to store and restore the state, or context, of a CPU in the Process Control Block (PCB). A context switcher makes it possible for multiple processes to share a single CPU, and context switching is an essential feature of any multitasking operating system.
When the scheduler switches the CPU from executing one process to another, the state of the currently running process is saved into its process control block. The state used to set up the registers, program counter, etc. for the process that will run next is then loaded from that process's own PCB, and the second process can start executing.
Information saved and restored during a context switch includes:
Program Counter
Scheduling information
Changed process state
Accounting information
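The save/restore cycle can be sketched with a toy PCB holding the fields listed above (an illustration only; real PCBs and register sets are far richer than this):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    state: str = "ready"

def context_switch(current: PCB, cpu: dict, next_pcb: PCB) -> dict:
    # Save the running process's CPU context into its PCB...
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    # ...then load the next process's saved context onto the CPU.
    next_pcb.state = "running"
    return {"pc": next_pcb.program_counter, "regs": dict(next_pcb.registers)}

p1 = PCB(pid=1, program_counter=100, state="running")
p2 = PCB(pid=2, program_counter=200, registers={"r0": 7})
cpu = {"pc": 105, "regs": {"r0": 3}}   # p1 has advanced to instruction 105

cpu = context_switch(p1, cpu, p2)      # the CPU now runs p2's context
```

When p1 is dispatched again later, its saved program counter (105) would be loaded back, so it resumes exactly where it stopped.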
Non-Preemptive
FCFS (FIRST-COME, FIRST-SERVED) Scheduling
Arrival time (AT) − Arrival time is the time at which the process arrives in ready
queue.
Burst time (BT) or CPU time of the process − Burst time is the amount of CPU time a particular process requires to complete its execution.
Completion time (CT) − Completion time is the time at which the process has been
terminated.
Turn-around time (TAT) − The total time from arrival time to completion time is
known as turn-around time. TAT can be written as,
Turn-around time (TAT) = Completion time (CT) – Arrival time (AT) or, TAT =
Burst time (BT) + Waiting time (WT)
Waiting time (WT) − Waiting time is the time a process spends waiting in the ready queue for its CPU allocation while other processes execute. WT is written as,
Waiting time (WT) = Turn-around time (TAT) – Burst time (BT)
Problem 1
Consider the table below and find Completion time (CT), Turn-around time (TAT), Waiting time (WT), Response time (RT), Average Turn-around time, and Average Waiting time.

Process   Arrival Time (AT)   Burst Time (BT)
P1        2                   2
P2        5                   6
P3        0                   4
P4        0                   7
P5        7                   4
Solution
Gantt chart (FCFS, ordered by arrival time): P3 (0–4) | P4 (4–11) | P1 (11–13) | P2 (13–19) | P5 (19–23)
For this problem, CT, TAT, WT, and RT are shown in the table below −

Process   CT   TAT   WT   RT
P1        13   11    9    9
P2        19   14    8    8
P3        4    4     0    0
P4        11   11    4    4
P5        23   16    12   12

Average Turn-around time = (11 + 14 + 4 + 11 + 16)/5 = 56/5 = 11.2
Average Waiting time = (9 + 8 + 0 + 4 + 12)/5 = 33/5 = 6.6
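The schedule for Problem 1 can be computed with a short FCFS sketch. Two assumptions are made, since the original table carries no headers: the columns are arrival time then burst time, and ties in arrival time are broken by listing order:

```python
# FCFS: run processes in order of arrival; the CPU is never preempted.
# Each tuple is (name, arrival_time, burst_time) -- an assumption, as
# the original table has no column headers.
procs = [("P1", 2, 2), ("P2", 5, 6), ("P3", 0, 4), ("P4", 0, 7), ("P5", 7, 4)]

order = sorted(range(len(procs)), key=lambda i: (procs[i][1], i))
time, results = 0, {}
for i in order:
    name, at, bt = procs[i]
    start = max(time, at)    # CPU idles until the process arrives, if needed
    ct = start + bt          # completion time
    tat = ct - at            # turn-around time = CT - AT
    wt = tat - bt            # waiting time = TAT - BT
    results[name] = (ct, tat, wt)
    time = ct

avg_wt = sum(r[2] for r in results.values()) / len(results)
avg_tat = sum(r[1] for r in results.values()) / len(results)
```

Under these assumptions the Gantt order is P3, P4, P1, P2, P5, giving an average waiting time of 6.6 and an average turn-around time of 11.2.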
Shortest Job First has the advantage of the minimum average waiting time among all scheduling algorithms.
It is a greedy algorithm.
It may cause starvation if shorter processes keep coming; this problem can be solved using the concept of ageing.
It is practically infeasible, as the operating system may not know the burst times in advance and therefore cannot sort on them. While execution time cannot be predicted exactly, several methods can be used to estimate it, such as a weighted average of previous execution times.
SJF can be used in specialized environments where accurate estimates of running time are available.
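The "weighted average of previous execution times" mentioned above is usually exponential averaging: τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the latest measured burst and τ(n) the previous estimate. A sketch (the α value and the initial guess below are arbitrary illustrative choices):

```python
def next_estimate(actual_bursts, initial_guess=10.0, alpha=0.5):
    """Exponentially weighted average of past CPU-burst lengths."""
    tau = initial_guess
    for t in actual_bursts:
        # Recent bursts weigh more; old history decays geometrically.
        tau = alpha * t + (1 - alpha) * tau
    return tau

est = next_estimate([6, 4, 6, 4])
```

With α = 1 only the last burst matters; with α = 0 the estimate never changes, so α tunes how quickly the scheduler adapts.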
Algorithm:
Example-1: Consider the following table of arrival time and burst time for five processes P1,
P2, P3, P4 and P5.
Process   Burst Time   Arrival Time
P1        6 ms         2 ms
P2        2 ms         5 ms
P3        8 ms         1 ms
P4        3 ms         0 ms
P5        4 ms         4 ms
The Shortest Job First CPU Scheduling Algorithm will work on the basis of steps as
mentioned below:
At time = 0,
Process P4 arrives and starts executing
At time = 1,
Process P3 arrives, but P4 still needs 2 execution units to complete, so P3 waits until P4 finishes.
At time = 2,
Process P1 arrives and is added to the waiting table.
P4 continues its execution.
At time = 3,
Process P4 will finish its execution.
Then, the burst times of P3 and P1 are compared, and Process P1 is executed because its burst time is lower than P3's.
At time = 4,
Process P5 arrives and is added to the waiting Table.
P1 will continue execution.
At time = 5,
Process P2 arrives and is added to the waiting Table.
P1 will continue execution.
At time = 9,
Process P1 will finish its execution.
The burst times of P3, P5, and P2 are compared, and Process P2 is executed because its burst time is the lowest among all, while P3 and P5 remain in the waiting table.
At time = 11,
The execution of Process P2 will be done.
The burst times of P3 and P5 are compared, and Process P5 is executed because its burst time is lower than P3's.
At time = 15,
Process P5 will finish its execution.
At time = 23,
Process P3 will finish its execution.
The overall execution of the processes will be as shown below:
Gantt chart: P4 (0–3) | P1 (3–9) | P2 (9–11) | P5 (11–15) | P3 (15–23)
Now, let's calculate the waiting time (start time – arrival time) for each process in the above example:
P4 = 0 – 0 = 0
P1 = 3 – 2 = 1
P2 = 9 – 5 = 4
P5 = 11 – 4 = 7
P3 = 15 – 1 = 14
Average Waiting Time = (0 + 1 + 4 + 7 + 14)/5 = 26/5 = 5.2
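The walkthrough can be checked with a short non-preemptive SJF simulation (process tuples read as burst time then arrival time, matching the narrative above):

```python
# Non-preemptive SJF for Example-1: tuples are (name, burst, arrival).
procs = [("P1", 6, 2), ("P2", 2, 5), ("P3", 8, 1), ("P4", 3, 0), ("P5", 4, 4)]

time, done, wait = 0, set(), {}
while len(done) < len(procs):
    # Only processes that have already arrived are candidates.
    ready = [p for p in procs if p[2] <= time and p[0] not in done]
    if not ready:
        time += 1                # CPU idles until the next arrival
        continue
    name, bt, at = min(ready, key=lambda p: p[1])  # shortest burst first
    wait[name] = time - at       # waiting time = start time - arrival time
    time += bt                   # run the chosen process to completion
    done.add(name)

avg_wt = sum(wait.values()) / len(wait)
```

The simulation reproduces the waiting times computed above (P4 = 0, P1 = 1, P2 = 4, P5 = 7, P3 = 14) and the 5.2 average.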