Short Notes Operating Systems
I/O System Management
• It is concerned with the operation and control of I/O devices.
Secondary-Storage Management
• The operating system is responsible for:
✦ Storage allocation
✦ Disk scheduling
Networking (Distributed Systems)
• A distributed system is a collection of processors that do not share memory or a clock. Each
processor has its own local memory.
• The processors in the system are connected through a communication network.
• Communication takes place using a protocol.
• A distributed system provides user access to various system resources.
• Access to a shared resource allows:
✦ Computation speed-up
✦ Enhanced reliability
Protection System
• Protection refers to a mechanism for controlling access by programs, processes, or users to
both system and user resources.
The protection mechanism must:
✦ distinguish between authorized and unauthorized usage.
Command-Interpreter System
• Many commands are given to the operating system by control statements, which deal with:
✦ I/O handling
✦ secondary-storage management
✦ main-memory management
✦ file-system access
✦ protection
✦ networking
• The program that reads and interprets control statements is known variously as:
✦ command-line interpreter
✦ shell (in UNIX)
Operating-System Structures
• System Components
• Operating System Services
• System Calls
• System Programs
• System Structure
• Virtual Machines
• System Design and Implementation
• System Generation
Common System Components
• Process Management
• Main Memory Management
• File Management
• I/O System Management
• Secondary-Storage Management
• Networking
• Protection System
• Command-Interpreter System
Evolution of OS:
1. Mainframe Systems
Reduce setup time by batching similar jobs. Automatic job sequencing – automatically transfers
control from one job to another. This was the first rudimentary operating system.
Resident monitor:
• initial control is in the monitor
• control transfers to the job
• when the job completes, control transfers back to the monitor
2. Batch Processing Operating System:
• This type of OS accepts more than one job, and these jobs are batched/grouped together
according to their similar requirements. This is done by the computer operator. Whenever the
computer becomes available, the batched jobs are sent for execution, and gradually the
output is sent back to the user.
• It allowed only one program at a time.
• This OS is responsible for scheduling the jobs according to priority and the resources
required.
3. Multiprogramming Operating System:
• This type of OS executes more than one job simultaneously on a single processor. It
increases CPU utilization by organizing jobs so that the CPU always has one job to execute.
• The concept of multiprogramming is described as follows:
• All the jobs that enter the system are stored in the job pool (on disk). The operating system
loads a set of jobs from the job pool into main memory and begins to execute them.
• During execution, the job may have to wait for some task, such as an I/O operation, to
complete. In a multiprogramming system, the operating system simply switches to another
job and executes. When that job needs to wait, the CPU is switched to another job, and so
on.
• When the first job finishes waiting, it gets the CPU back.
• As long as at least one job needs to execute, the CPU is never idle.
• Multiprogramming operating systems use the mechanism of job scheduling and CPU
scheduling.
4. Time-Sharing/Multitasking Operating Systems
Time sharing (or multitasking) OS is a logical extension of multiprogramming. It
provides extra facilities such as:
• Faster switching between multiple jobs to make processing faster.
• Allows multiple users to share computer system simultaneously.
• The users can interact with each job while it is running.
These systems use the concept of virtual memory for effective utilization of memory space. Hence,
in this OS, no jobs are discarded; each one is executed using the virtual-memory concept. It uses
CPU scheduling, memory management, disk management and security management.
Examples: CTSS, MULTICS, CAL, UNIX etc.
5. Multiprocessor Operating Systems
Multiprocessor operating systems are also known as parallel OS or tightly coupled OS. Such
operating systems have more than one processor in close communication, sharing the
computer bus, the clock, and sometimes memory and peripheral devices. They execute multiple
jobs at the same time and make the processing faster.
Multiprocessor systems have three main advantages:
• Increased throughput: By increasing the number of processors, the system performs more
work in less time. The speed-up ratio with N processors is less than N.
• Economy of scale: Multiprocessor systems can cost less than equivalent multiple single-
processor systems, because they can share peripherals, mass storage, and power supplies.
• Increased reliability: If one processor fails, then each of the remaining processors must pick
up a share of the work of the failed processor. The failure of one processor will not halt the
system, but only slow it down.
The ability to continue providing service proportional to the level of surviving hardware
is called graceful degradation. Systems designed for graceful degradation are called
fault tolerant.
6. Real-Time Operating Systems
• A real-time operating system can be classified into two categories:
1. hard real-time systems and 2. soft real-time systems.
• A hard real-time system guarantees that critical tasks be completed on time. This goal
requires that all delays in the system be bounded, from the retrieval of stored data to the
time that it takes the operating system to finish any request made of it. Such time
constraints dictate the facilities that are available in hard real-time systems.
• A soft real-time system is a less restrictive type of real-time system. Here, a critical real-
time task gets priority over other tasks and retains that priority until it completes. Soft real
time system can be mixed with other types of systems. Due to less restriction, they are
risky to use for industrial control and robotics.
Operating System Services
Following are the five services provided by operating systems for the convenience of the
users.
1. Program Execution
The purpose of computer systems is to allow the user to execute programs, so the operating
system provides an environment where the user can conveniently run programs. Running a
program involves allocating and deallocating memory, and CPU scheduling in the case of
multiprocessing.
2. I/O Operations
Each program requires input and produces output. This involves the use of I/O devices. So the
operating system provides I/O services, which makes it convenient for users to run programs.
3. File System Manipulation
The output of a program may need to be written into new files or input taken from some files. The
operating system provides this service.
4. Communications
Processes need to communicate with each other to exchange information during execution. This
may be between processes running on the same computer or on different computers.
Communication can occur in two ways: (i) shared memory or (ii) message passing.
5. Error Detection
An error in one part of the system may cause malfunctioning of the complete system. To avoid
such a situation, the operating system constantly monitors the system to detect errors. This
relieves the user of the worry of errors propagating to various parts of the system and causing
malfunctioning.
Following are the three services provided by operating systems for ensuring the efficient operation
of the system itself.
1. Resource allocation
When multiple users are logged on the system or multiple jobs are running at the same time,
resources must be allocated to each of them. Many different types of resources are managed
by the operating system.
2. Accounting
The operating system keeps track of which users use how much and what kinds of computer
resources. This record keeping may be used for accounting (so that users can be billed) or
simply for accumulating usage statistics.
3. Protection
When several disjoint processes execute concurrently, it should not be possible for one process
to interfere with the others, or with the operating system itself. Protection involves ensuring
that all access to system resources is controlled. Security of the system from outsiders is also
important. Such security starts with each user having to authenticate himself or herself to the
system, usually by means of a password, to be allowed access to the resources.
System Call:
• System calls provide an interface between the process and the operating system.
• System calls allow user-level processes to request services from the operating
system that the process itself is not allowed to perform.
• For example, for I/O a process makes a system call telling the operating system to read
from or write to a particular area, and this request is satisfied by the operating system.
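As a minimal, hedged illustration (not from the original notes; a POSIX environment is assumed), the sketch below shows a user-level process crossing into the kernel through the read() and write() system calls:

/* Minimal sketch: a user-level process requesting I/O through system calls.
   Assumes a POSIX environment; read() and write() trap into the kernel,
   which performs the privileged device access on the process's behalf. */
#include <unistd.h>

int main(void) {
    char buf[64];
    /* write(): ask the OS to send bytes to file descriptor 1 (stdout) */
    write(1, "Enter your name: ", 17);
    /* read(): ask the OS to fetch bytes from file descriptor 0 (stdin) */
    ssize_t n = read(0, buf, sizeof(buf));
    if (n > 0) {
        write(1, "Hello, ", 7);
        write(1, buf, (size_t)n);
    }
    return 0;
}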
Process control
• end, abort
• load, execute
• create process, terminate process
• get process attributes, set process attributes
• wait for time
• wait event, signal event
• allocate and free memory
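As a hedged sketch (POSIX assumed, not part of the original notes), the create/execute/wait calls listed above map onto fork(), exec*() and wait*() on UNIX-like systems:

/* Sketch: process-control system calls on a UNIX-like system. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create process */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: load and execute a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    } else {
        int status;
        waitpid(pid, &status, 0);    /* wait event: child termination */
        printf("child %d finished with status %d\n", (int)pid, status);
    }
    return 0;                        /* end (normal termination) */
}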
• Program- A program is a set of instructions; by executing them step by step we can get a service
done. It is a passive entity and resides on the disk.
• Process- An instance of a program, or a program under execution. It is an active entity which resides
in main memory.
Operations on Process: -
o Creating a process
o Terminating a process
Modes of Execution: -
o User Mode
o Kernel Mode
- No preemption is allowed.
• Process Control Block- A Process Control Block (PCB) is a data structure that contains all the
information related to a process. The process control block is also called the task control block, an entry
of the process table, etc. (A hedged C sketch of such a structure is given after the attribute list below.)
• Attributes of PCB: -
o Process ID
o Program Counter status
o Process state
o Memory Management data
o CPU scheduling data
o CPU registers
o Address space for process
o Accounting data
o I/O information
o User’s Identification
o List of open files
o List of open devices
o Priority
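To make the attribute list above concrete, here is a purely hypothetical C sketch of a PCB; the field names and sizes are illustrative assumptions, not the layout of any real kernel:

/* Hypothetical sketch only: a simplified PCB mirroring the attributes above.
   Real kernels (e.g. Linux's task_struct) are far larger. */
#include <stdint.h>
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int            pid;              /* Process ID */
    proc_state_t   state;            /* Process state */
    uint64_t       program_counter;  /* saved program counter */
    uint64_t       registers[16];    /* saved CPU registers */
    void          *page_table;       /* memory-management data / address space */
    int            priority;         /* scheduling priority */
    uint64_t       cpu_time_used;    /* accounting data */
    int            open_files[16];   /* list of open files (descriptors) */
    int            uid;              /* user identification */
    struct pcb    *next;             /* link for ready/waiting queues */
} pcb_t;

int main(void) {
    pcb_t p = { .pid = 1, .state = NEW, .priority = 5 };   /* example entry */
    printf("pcb: pid=%d state=%d priority=%d\n", p.pid, (int)p.state, p.priority);
    return 0;
}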
• Threads- A thread may be defined as a flow of execution through some process code.
o A thread is also known as a lightweight process.
o Threads improve application’s performance by providing parallelism.
o Threads improve the performance of operating systems by reducing the process overhead.
o A thread is equivalent to a classical process in many respects; threading is a software approach to
improving performance.
o Every thread belongs to exactly one process, and no thread can exist outside a process.
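As a hedged illustration of these points (POSIX threads assumed; not part of the original notes), the sketch below creates two threads that share one process's address space:

/* Two lightweight threads of one process running the same function. */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running inside the same process\n", id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);   /* lightweight: no new address space */
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);                    /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}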
• Types of Threads-
o There are two types of threads:
- User Level Threads
- Kernel Level Threads
o User Level Threads vs Kernel Level Threads:
- User threads are implemented by users; kernel threads are implemented by the OS.
- The OS does not recognize user level threads; kernel threads are recognized by the OS.
- User level threads are designed as dependent threads; kernel level threads are designed as
independent threads.
• Process vs Thread:
- Process switching needs interaction with the OS; thread switching does not require any
interaction with the operating system.
- If one process gets blocked, then another process cannot run until the first process is
unblocked; whereas if one thread gets blocked and waiting, a second thread of the same task
can execute.
- Multiple processes without threads need more resources; multi-threaded processes use fewer
resources.
- In multiple processes, each process operates independently of the others; a thread can read,
write or alter another thread's data.
o Benefits of multithreading:
- Responsiveness
o Multithreading Models:
- Many-to-one model
- One-to-one model
- Many-to-many model
• Schedulers in OS- Schedulers in OS are special system software. They help in scheduling the
processes in various ways.
Types of Schedulers:
o Long-term Scheduler
- The long-term scheduler's job is to maintain a good degree of multiprogramming.
o Short-term Scheduler
- The short-term scheduler's job is to increase system performance by selecting the next process
to run on the CPU.
o Medium-term Scheduler
- The medium-term scheduler's job is to perform swapping.
• Dispatcher- The dispatcher is the module that gives control of the CPU to the process selected
by the scheduler. This function involves:
o Switching context.
o Switching to user mode.
o Jumping to the proper location in the newly loaded program.
The dispatcher needs to be as fast as possible, as it is run on every context switch. The time
consumed by the dispatcher is known as dispatch latency.
• Process Queues- The Operating system manages various types of queues for each of the process
states. The PCB related to the process is also stored in the queue of the same state. If the Process
is moved from one state to another state, then its PCB is also unlinked from the corresponding
queue and added to the other state queue in which the transition is made.
o Queues Maintained by Operating System:
- Job queue
- Ready queue
- Waiting queue
• Scheduling Criteria-
o CPU Utilization: CPU should be as busy as possible.
o Throughput: Number of processes completed per unit time.
Throughput = N/L,
where N = number of processes and L = length of the schedule (the time interval considered).
o First Come First Serve (FCFS)-
- Non-preemptive in nature.
- No starvation.
- Simple.
- If shorter jobs arrive later than long jobs, the shorter jobs have to wait for a long
time; this phenomenon is known as the Convoy Effect (illustrated in the sketch below).
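The following hedged C sketch computes FCFS waiting and turnaround times; the burst times are invented for illustration, and with the long job first the two short jobs wait behind it, which is exactly the convoy effect:

/* FCFS timing for hypothetical jobs, all arriving at time 0. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* assumed CPU bursts */
    int n = 3, completion = 0;
    int total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int waiting = completion;             /* time spent waiting in the ready queue */
        completion += burst[i];               /* job runs to completion, non-preemptive */
        int turnaround = completion;          /* arrival assumed at time 0 */
        total_wait += waiting;
        total_tat += turnaround;
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, turnaround);
    }
    printf("avg waiting=%.2f avg turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}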
o Shortest Job First(SJF) or Shortest Job Next (SJN)-
- It executes jobs with smaller CPU burst first.
- Non-preemptive in nature.
- Gives the minimum average waiting time among all scheduling algorithms.
- High throughput.
• Round Robin
o Round robin scheduling is similar to FCFS scheduling, except that CPU bursts are assigned with
limits called time quantum.
o Always Pre-emptive in nature.
o Best response time.
o Ready queue is treated as a circular queue.
o Mostly used for a time-sharing system.
o With large time quantum, it acts as FIFO
o No priority.
o No starvation.
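A hedged sketch of the round-robin behaviour described above; the time quantum and burst values are assumptions chosen for illustration:

/* Round robin over a circular ready queue with a fixed time quantum. */
#include <stdio.h>

int main(void) {
    int remaining[] = {5, 8, 3};      /* assumed remaining burst times */
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {          /* ready queue treated circularly */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                      /* process runs for at most one quantum */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                done++;
                printf("P%d completes at time %d\n", i + 1, time);
            }
        }
    }
    return 0;
}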
• Priority Scheduling
o Every process assigned with a number, and these numbers will be the priority of that process.
o Processes with lower priority may starve for CPU.
o No guarantee on response time.
o Mostly used in Real-time operating systems.
o Can be preemptive and non-preemptive both in nature.
o Highest Response Ratio Next (HRRN)-
- A response ratio is calculated for each of the available jobs, and the job with the highest
response ratio is given priority over the others.
- Response Ratio = (W + S)/S, where W → Waiting Time, S → Service Time or Burst Time.
- One of the most optimal scheduling algorithms.
- Non-preemptive in nature.
- No starvation.
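As a small worked example (the numbers are hypothetical, not from the notes): a job that has waited W = 6 with burst S = 3 has response ratio (6 + 3)/3 = 3, while a newly arrived job with W = 0 has ratio (0 + S)/S = 1, so the long-waiting job is chosen next. This growth of the ratio with waiting time is why HRRN avoids starvation.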
Practice questions
Note: - The Gantt chart of each question is given. Students need to prepare it, calculate the
response time, number of context switches, arrival time, turnaround time, etc.,
and verify it against the given solution for the exercise.
• Deadlock- A deadlock is a situation where each of the computer processes waits for a resource
that is assigned to some other process. In this situation, none of the processes gets executed,
since the resource it needs is held by some other process which is also waiting for some other
resource to be released.
A minimum of two processes is required for a deadlock to occur.
• Deadlock Prevention-
- No sharing
- No waiting
- Preempt resources
o A system is said to be safe if and only if the needs of all the processes could be satisfied with
the available resources in some order. The order is known as a safe sequence.
o There may exist multiple safe sequences.
o With a single instance of each resource type, a resource allocation graph is used.
o Banker's Algorithm (Safety Algorithm) is used to allocate multiple instances of resources to
the processes; a hedged sketch of its safety check is given below.
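The sketch below shows only the safety check at the heart of the Banker's Algorithm; the Allocation, Max and Available values are invented for illustration and are not data from the notes:

/* Safety check: repeatedly find a process whose Need can be met by Available. */
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes */
#define R 2   /* resource types */

int main(void) {
    int alloc[P][R] = {{1, 0}, {0, 1}, {1, 1}};   /* assumed current allocation */
    int max[P][R]   = {{3, 2}, {1, 2}, {2, 2}};   /* assumed maximum demand */
    int avail[R]    = {2, 1};                     /* assumed available resources */

    int need[P][R];
    bool finished[P] = {false};
    int safe_seq[P], count = 0;

    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */

    bool progress = true;
    while (progress && count < P) {
        progress = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > avail[j]) { can_run = false; break; }
            if (can_run) {
                for (int j = 0; j < R; j++)
                    avail[j] += alloc[i][j];        /* process finishes, releases resources */
                finished[i] = true;
                safe_seq[count++] = i;
                progress = true;
            }
        }
    }

    if (count == P) {
        printf("Safe sequence:");
        for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
        printf("\n");
    } else {
        printf("System is not in a safe state.\n");
    }
    return 0;
}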
o Deadlock Detection- The detection algorithm may be invoked when CPU utilization decreases or
drops to zero, i.e. when most of the processes are blocked.
o Deadlock Recovery-
- Abort one process at a time and check; repeat this until the deadlock is removed.
- Preempt some resources from processes and give these resources to other processes
until the deadlock cycle is broken.
• Deadlock Ignorance: This involves completely ignoring the possibility of deadlock; according to
this approach, no deadlock exists. It uses the Ostrich approach.
• Inter-process Communication-
o Independent processes- Two or more processes will execute autonomously without affecting
the output or order of execution of other processes.
- No states will be shared with other processes
o Co-operating processes- Two or more processes are said to be co-operative if and only if they
get affected by or affect the execution of other processes.
- These processes require an IPC mechanism that will allow them to exchange information
and data like messages, shared memory, shared variables, etc.
- Shared resources, speed-up and modularity are the advantages of co-operating
processes.
• IPC Models- There are primarily two IPC models:
o Shared memory
o Message Passing
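As a hedged illustration of the message-passing model above (POSIX pipe and fork assumed; not from the original notes), the parent sends a message and the child receives it through the kernel:

/* Message passing through a pipe between a parent and child process. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                       /* child: the receiver */
        close(fd[1]);                     /* close unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
    } else {                              /* parent: the sender */
        close(fd[0]);                     /* close unused read end */
        const char *msg = "hello via message passing";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}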
• Concurrency
o Able to run multiple applications at the same time.
o Allows better resource utilization
o Better performance
o Better average response time
o When two or more processes execute concurrently while sharing some system resources, we need
proper synchronization between their actions so that all these processes execute properly.
o The goal of process synchronization is to avoid inconsistent output.
• Critical Section- The critical section is a small piece of code, or a section of the program, in which a
process accesses shared resources during its execution.
• Race Condition- A race condition occurs when the final output produced depends entirely upon the
order in which the instructions of the processes are executed.
o Signal operation
- Also known as UP, or V operation.
- It is used in the exit part of the critical section to release the acquired lock.
• Types of semaphore-
o Binary semaphore- Uses only two values, 0 and 1.
- It involves busy waiting, known as the spin-lock issue.
o Counting semaphore- Its positive value indicates the number of instances of resource R currently
available for allocation, with no request in the waiting queue.
- While using multiple semaphores, following a proper ordering is highly important.
o Strong semaphore- A semaphore whose definition includes the policy of a FIFO queue.
o Weak semaphore- A semaphore that doesn't specify the order in which processes are removed
from the queue.
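A hedged sketch of semaphore use (POSIX unnamed semaphores and pthreads assumed): a binary semaphore guards a shared counter; sem_wait corresponds to the P (wait/DOWN) operation and sem_post to the V (signal/UP) operation mentioned above:

/* Binary semaphore protecting a shared counter updated by two threads. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                  /* binary semaphore, initial value 1 */
static int shared_counter = 0;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);            /* entry section: acquire */
        shared_counter++;            /* critical section */
        sem_post(&mutex);            /* exit section: release */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* expected 200000 */
    sem_destroy(&mutex);
    return 0;
}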
• Synchronization Mechanism
1. Lock variable
- Software mechanism implemented in user mode.
- Deadlock free.
3. Turn variable
- It uses a variable called turn to provide the synchronization among processes.
- Ensures mutual exclusion.
- Ensures bounded waiting because processes execute one by one and there is a
guarantee that each process will get a chance to execute.
- Starvation doesn’t exist.
- It is architecturally neutral because it doesn't need any support from the operating system.
- Deadlock free.
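A hedged sketch of the turn-variable (strict alternation) idea for two processes; it shows only the logical shape and ignores compiler/CPU reordering:

/* Strict alternation with a shared turn variable for two processes/threads. */
#include <stdio.h>

volatile int turn = 0;               /* whose turn it is: 0 or 1 */

void enter_critical(int self) {
    while (turn != self) { /* busy wait until it is my turn */ }
}

void exit_critical(int self) {
    turn = 1 - self;                 /* hand the turn to the other process */
}

int main(void) {
    /* Single-threaded demonstration of the protocol's shape only. */
    enter_critical(0);
    printf("process 0 in critical section\n");
    exit_critical(0);
    return 0;
}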
4. Peterson Solution
- It is a synchronization mechanism implemented in user mode; it uses two variables:
turn and interest.
- Satisfy mutual exclusion.
- Satisfy progress.
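A hedged C sketch of Peterson's solution for two processes, using the turn and interest variables above; real code would need atomics or memory barriers, which are omitted here for clarity:

/* Classical Peterson's solution for processes 0 and 1. */
#include <stdio.h>

volatile int interest[2] = {0, 0};   /* interest[i] = 1: process i wants to enter */
volatile int turn = 0;               /* whose turn it is if both are interested */

void enter_critical(int self) {
    int other = 1 - self;
    interest[self] = 1;              /* declare interest */
    turn = other;                    /* politely give the other process the turn */
    while (interest[other] && turn == other) { /* busy wait */ }
}

void exit_critical(int self) {
    interest[self] = 0;              /* withdraw interest, letting the other proceed */
}

int main(void) {
    enter_critical(0);
    printf("process 0 in critical section\n");
    exit_critical(0);
    return 0;
}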
****