
Unit I

Operating System Introduction


Topics
• Operating System - Introduction
• Structures - Simple Batch, Multiprogrammed
• Time-shared, Personal Computer, Parallel
• Distributed Systems, Real-Time Systems
• System components
• Operating System services,
• System Calls Process - Process concepts and scheduling
• Operations on processes,
• Cooperating Processes, Threads.
• Process-related system calls – fork, exit, wait and exec.
Operating System
• The OS acts as an interface between a user of a
computer and the computer hardware
• The OS is software that manages the hardware
• Operating system goals:
– Make the computer system convenient to use
– Use the computer hardware in an efficient manner
Computer System Structure
• Computer system can be divided into four components:
– Hardware – provides basic computing resources
• CPU, memory, I/O devices
– Operating system
• Controls and coordinates use of hardware among various
applications and users
– Application programs – define the ways in which the system
resources are used to solve the computing problems of the users
• Word processors, compilers, web browsers, database
systems, video games
– Users
• People, machines, other computers
Abstract View of Components of Computer
Operating system role
• User View:
Interface

• System view:
Resource allocator/Manager
Computer system Architecture
• Single processor systems
• Multiprocessor system
(parallel/tightly coupled systems)
Types of Multiprocessor Systems:
(i). Asymmetric multiprocessing system
(ii). Symmetric multiprocessing system
Advantages of multiprocessor system
• Increased throughput
• Economy of scale
• Increased reliability
TYPES OF OPERATING SYSTEM

• Different operating systems serve different purposes and have different applications.
• Some operating systems are suited for everyday tasks and functions, such as using a smartphone or personal laptop, whereas others are required for more specialised work or tasks, such as gaming.
TYPES OF OPERATING SYSTEM
1. Simple batch operating system
2. Multi-programming operating system
3. Time-sharing operating system
4. Distributed operating system
5. Real-time operating system
6. Multi-processing operating system
Simple Batch OS
In a batch operating system:
• First, the user prepares his job using punch cards.
• Then, he submits the job to the computer operator.
• The operator collects the jobs from different users and sorts them into batches with similar needs.
• Then, the operator submits the batches to the processor one by one.
• All the jobs of one batch are executed together.
Advantages:
– Reduces idle time as jobs are processed
sequentially with minimal gaps.
– Jobs are executed automatically, reducing manual
intervention.
Disadvantages:
– Users cannot interact with the job once it starts,
making debugging and monitoring difficult.
– If a job in the batch fails, subsequent jobs might not
execute as planned.
Applications:
• Payroll systems
• Bank transaction processing
• Large-scale data entry and data analysis
Multi-Programming Operating System

A multiprogramming operating system keeps more than one program in main memory at a time, and any one of them can be in execution. This is basically used for better utilization of resources.
Multiprogramming OS
• In multiprogramming, the CPU is kept busy at all times.
• CPU time and I/O time are the two forms of system time required by each process.
• When a process is waiting for I/O in a multiprogramming environment, the CPU can begin the execution of other processes. As a result, multiprogramming helps in improving the system's efficiency.
Context Switching:
• The operating system switches between processes to
manage their execution efficiently.
Multiprogramming
Advantages:
• Increased CPU Utilization:
– Keeps the CPU busy by executing another process when one is waiting
for I/O.
• Higher Throughput:
– Processes more jobs in a given time.
• Improved Responsiveness:
– Reduces idle time for both the CPU and other resources.
Disadvantages:
• Complexity:
– Managing multiple processes requires sophisticated scheduling and
memory management algorithms.
• Potential for Deadlocks:
– Improper resource allocation may lead to deadlocks.
• Overhead:
– Context switching introduces processing overhead.
Examples of Multiprogramming Operating
Systems:
• Unix
• Linux
• Windows (in multiprogramming mode)
Multitasking/Time-Sharing Systems
• A multiprogramming system provides an environment in which various system resources are utilized effectively, but it does not provide for user interaction with the computer system.
• Time sharing is a logical extension of multiprogramming. Here the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the user can interact with each program while it is running.
Multitasking/Time-Sharing Systems
• Time Slices (Quanta):
The CPU time is divided into small units, called time
slices or quanta, which are allocated to each process in
a round-robin or other scheduling fashion.
• Interactive User Support:
Designed for multi-user environments where users can
interact with the system in real time.
• Preemptive Scheduling:
If a process exceeds its time slice, it is paused, and the
next process is executed, ensuring fairness.
Time-Sharing Operating Systems
Real-Life Application:
Bank ATMs: Multiple users interact with the
system simultaneously.
Online Servers: Web servers manage multiple requests from different users concurrently.
Types of Multitasking OS

Preemptive Multitasking
• Definition: In preemptive multitasking, the operating
system has full control over the CPU and can preempt or
interrupt tasks to allocate CPU time to another task. This
ensures fair CPU allocation and better responsiveness.
• How It Works:
– The OS uses a scheduling algorithm (e.g., round-robin, priority
scheduling).
– Tasks are assigned fixed time slices (or quanta).
– If a task exceeds its time slice, the OS forcibly pauses it and
switches to another task.
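To make the idea of fixed time slices concrete, here is a toy, illustrative C sketch (the job lengths and quantum are invented for this example): three jobs with different CPU bursts share the CPU in round-robin fashion with a quantum of 2 time units.

#include <stdio.h>

/* Toy round-robin simulation: each job gets the CPU for at most one
   quantum before the next job is scheduled. */
int main(void) {
    int remaining[] = {5, 3, 8};   /* remaining CPU time of each job */
    const int n = 3, quantum = 2;
    int time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;                       /* job already finished */
            int run = (remaining[i] < quantum) ? remaining[i] : quantum;
            printf("t=%2d: job %d runs for %d unit(s)\n", time, i, run);
            time += run;
            remaining[i] -= run;
            if (remaining[i] == 0) {
                printf("t=%2d: job %d finished\n", time, i);
                done++;
            }
        }
    }
    return 0;
}

No job can monopolize the CPU for longer than one quantum, which is exactly the fairness property of preemptive time sharing described above.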
Distributed OS
Distributed OS is designed for networked
computing environments where multiple
computers are interconnected.
These systems manage resources across
multiple machines, allowing users to access
files and services from different locations.
Distributed OS
• The Distributed OS is separated into sections
and loaded on different machines rather than
placed on a single machine.
• Each machine has a piece of the distributed
OS installed to allow them to communicate.
Distributed Operating System
A Distributed Operating System (DOS) is software that manages a group of independent computers and makes them appear to the users as a single system.
This type of operating system coordinates multiple computers in a network to work together and share resources, such as hardware, software, and data.
Applications of Distributed Operating Systems
• Cloud Computing: Managing virtual machines and storage across data centers.
• IoT Systems: Coordinating devices in a network.
• Supercomputers: Enabling parallel processing across many nodes.
• Content Delivery Networks (CDNs): Distributing content globally with high efficiency.
Real-time operating system (RTOS)
• A real-time operating system (RTOS) is a special kind
of operating system designed to handle tasks that
need to be completed quickly and on time.
• Unlike general-purpose operating systems (GPOS), which are good at multitasking and user interaction, an RTOS focuses on doing things in real time.
• The first RTOS was created by Cambridge University
in the 1960s.
Real-Time Operating System (RTOS)
• In an RTOS, each job has a deadline by which it must be completed; otherwise, there will be a significant loss, or even if the output is provided, it will be utterly useless.
• For example, in military applications, if you wish to drop a missile, the missile must be dropped with a specific degree of precision.
• These systems are commonly used in embedded systems, robotics, automotive control systems, and industrial automation.
Types of RTOS
• Hard Real time
• Soft Real time
Types of RTOS
• Hard Real-Time Systems:
– Missing a deadline is unacceptable and can lead to
failure.
– Examples: Pacemakers, industrial automation,
flight control systems.
• Soft Real-Time Systems:
– Missing a deadline degrades system performance
but doesn't cause failure.
– Examples: Multimedia systems, online gaming.
Operating system operations
In order to ensure the proper execution of the operating system, we must be able to distinguish between the execution of operating-system code and user-defined code.
When the computer system is executing on behalf of a user application, the system is in user mode.
However, when a user application requests a service from the operating system (via a system call), it must transition from user to kernel mode to fulfill the request.
A system call
• A system call is a mechanism used by
programs to request services from the
operating system (OS).
• A system call is a way for programs to interact
with the operating system.
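As a small illustrative sketch (the message text is arbitrary), the C program below requests a service from the operating system through the POSIX write() system call: the call traps from user mode into kernel mode and returns once the kernel has finished the work.

#include <unistd.h>   /* write(): POSIX system-call wrapper */
#include <string.h>   /* strlen() */

int main(void) {
    const char *msg = "Hello from user mode via a system call\n";

    /* write() transfers control to the kernel, which copies the bytes
       to the terminal device and then returns to user mode with the
       number of bytes written (or -1 on error). */
    ssize_t written = write(STDOUT_FILENO, msg, strlen(msg));

    return (written < 0) ? 1 : 0;
}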
Dual mode Operation
Example
Personal-Computer Systems(PCs)
• A personal computer (PC) is a small, relatively
inexpensive computer designed for an individual user.
• Low price
• All are based on the microprocessor technology that
enables manufacturers to put an entire CPU on one chip.
• At home, the most popular use for personal computers
is for playing games.
• Businesses use personal computers for word processing,
accounting, desktop publishing, and for running
spreadsheet and database management applications
Parallel system
A system consisting of more than one processor that is tightly coupled is called a "parallel system".
1. Symmetric
2. Asymmetric
System components

• Process Management
• I/O Device Management
• File Management
• Network Management
• Main Memory Management
• Secondary Storage Management
• Security Management
Process Management
• Create, load, execute, suspend, resume, and terminate processes.
• Switch the system among multiple processes in main memory.
• Provide communication mechanisms so that processes can communicate with each other.
• Provide synchronization mechanisms to control concurrent access to shared data, to keep shared data consistent.
• Allocate/de-allocate resources properly to prevent or avoid deadlock situations.
I/O Device Management
• Following are the tasks of I/O Device Management
component:
• Hide the details of H/W devices
• Manage main memory for the devices using cache, buffer,
and spooling
• Maintain and provide custom drivers for each device.
File Management
A file is defined as a set of correlated information, and it is defined by the creator of the file. Data files can be of any type, such as alphabetic, numeric, or alphanumeric.
The operating system implements the abstract concept of a file by managing mass storage devices, such as tapes and disks. Files are normally organized into directories to ease their use. These directories may contain files and other directories, and so on.
The operating system is responsible for the following activities in connection with file management:
• File creation and deletion
• Directory creation and deletion
• The support of primitives for manipulating files and directories
• Mapping files onto secondary storage
• File backup on stable (nonvolatile) storage media
Network Management
Network management is the process of managing and administering a
computer network.
A computer network is a collection of various types of computers connected
with each other.
Network management is the process of keeping your network healthy for an
efficient communication between different computers.
Following are the features of network management:
• Network administration
• Network maintenance
• Network operation
• Network provisioning
• Network security
Main Memory Management
Memory is a large array of words or bytes, each with its own address.
It is a repository of quickly accessible data shared by the CPU and I/O
devices.
Main memory is a volatile storage device which means it loses its
contents in the case of system failure or as soon as system power goes
down.
The main motivation behind Memory Management is to maximize
memory utilization on the computer system.
The operating system is responsible for the following activities in
connections with memory management:
• Keep track of which parts of memory are currently being used and by
whom.
• Decide which processes to load when memory space becomes
available.
• Allocate and deallocate memory space as needed.
Secondary Storage Management
The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory during execution. Since main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory.
The operating system is responsible for the following activities in
connection with disk management:
• Free space management
• Storage allocation
• Disk scheduling
Security Management
• The operating system is responsible for all the tasks and activities that happen in the computer system. The various processes in an operating system must be protected from each other's activities.
• Security management refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system, specifying the controls to be imposed, together with some means of enforcement.
Operating System Services
User services:
1. Program execution
2. User interface
– CLI: text-based commands (e.g., MS-DOS)
– GUI: widgets, folders, visual elements (e.g., Windows, Linux, macOS)
3. I/O operations
– Keyboard, mouse, and monitor have controllers; these controllers are managed by the OS
4. Communication
– Interprocess communication (IPC)
5. File system manipulation
– Create, store, delete, retrieve, and search files
6. Error detection
– Errors in programs or hardware are reported to the user or handled by the OS itself
Operating System Services
System services:
• Resource allocation
– Multiple jobs, multiple users
• Accounting
– Keep track of which users use how much and what kinds of resources
• Protection and security
– Access control and authentication
Process in Main Memory
When a program is loaded into memory, it may be divided into four components (stack, heap, text, and data) to form a process. A simplified depiction of a process in main memory is shown in the diagram below.

1.Stack
The process Stack contains the temporary
data such as method/function parameters,
return address and local variables.
2.Heap
This is dynamically allocated memory to a
process during its run time.
3.Text
It holds Machine code or instructions
executed by the CPU.
4.Data
This section contains the global and static
variables.
Memory Layout of a C Program
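A minimal sketch of such a program (the variable names are illustrative): it prints one address from each of the four regions described above. The exact values vary by platform and compiler, but the grouping into text, data, heap, and stack matches the layout of a process.

#include <stdio.h>
#include <stdlib.h>

int global_var = 10;            /* data segment: global/static variables */

void some_function(void) { }    /* function code lives in the text segment */

int main(void) {
    int local_var = 20;                     /* stack: local variable */
    int *heap_var = malloc(sizeof(int));    /* heap: dynamic allocation */

    printf("text  (code)   : %p\n", (void *)some_function);
    printf("data  (global) : %p\n", (void *)&global_var);
    printf("heap  (malloc) : %p\n", (void *)heap_var);
    printf("stack (local)  : %p\n", (void *)&local_var);

    free(heap_var);
    return 0;
}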
Process States
When a process executes, it changes state; the state of a process is generally determined by the current activity of the process.
Each process may be in one of the following states:
1. New: The process is being created.
2. Running: The process is being executed.
3. Waiting: The process is waiting for some event to occur.
4. Ready: The process is waiting to be assigned to a processor.
5. Terminated: The process has finished execution.
• Only one process can be running on any processor at any time, but many processes may be in the ready and waiting states. The ready processes are loaded into a "ready queue".
Process Control Block (PCB)
• A Process Control Block is a data structure maintained by the operating system for every process. The PCB is identified by an integer process ID (PID).
• A PCB keeps all the information needed to keep track of a process, as listed below.
• The architecture of a PCB is completely dependent on the operating system and may contain different information in different operating systems. A simplified description of a PCB follows.
• Process State: The current state of the process, i.e., whether it is new, ready, running, waiting, or terminated.
• Process ID: Unique identification for each process in the operating system.
• Pointer: A pointer to the parent process.
• Program Counter: A pointer to the address of the next instruction to be executed for this process.
• CPU Registers: The registers whose contents must be saved when the process leaves the running state so it can resume correctly.
• CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
• Memory Management Information: Information such as the page table, memory limits, and segment table, depending on the memory system used by the operating system.
• Accounting Information: The amount of CPU used for process execution, time limits, execution ID, etc.
• I/O Status Information: A list of I/O devices allocated to the process.
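As an illustration of how these fields might be grouped, here is a hypothetical C structure; the names and sizes are invented for this sketch, and real kernels use far richer structures (for example, task_struct in Linux).

#include <sys/types.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Simplified, hypothetical PCB mirroring the fields listed above. */
struct pcb {
    pid_t            pid;              /* Process ID                     */
    enum proc_state  state;            /* Process state                  */
    struct pcb      *parent;           /* Pointer to the parent process  */
    unsigned long    program_counter;  /* Address of next instruction    */
    unsigned long    registers[16];    /* Saved CPU registers            */
    int              priority;         /* CPU-scheduling information     */
    void            *page_table;       /* Memory-management information  */
    unsigned long    cpu_time_used;    /* Accounting information         */
    int              open_devices[8];  /* I/O status information         */
};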
Context Switch
• When CPU switches to another process, the system must save
the state of the old process and load the saved state for the
new process via a context switch
• Context of a process represented in the PCB
• Context-switch time is pure overhead;
• The system does no useful work while switching
• Time dependent on hardware support
– Some hardware provides multiple sets of registers per CPU → multiple contexts loaded at once
CPU Switch From Process to Process
A context switch occurs when the CPU switches from
one process to another.
Thread
A process can be divided into a number of lightweight processes; each lightweight process is said to be a thread.
A thread has a program counter (keeps track of which instruction to execute next), registers (hold its current working variables), and a stack (execution history).

Thread States:
1. Born: The thread has just been created.
2. Ready: The thread is waiting for the CPU.
3. Running: The system assigns the processor to the thread.
4. Sleeping: A sleeping thread becomes ready after the designated sleep time expires.
5. Dead: The execution of the thread has finished.

E.g., in a word processor, typing, formatting, spell checking, and saving are separate threads.
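A minimal POSIX-threads sketch (compile with -pthread; the two task names stand in for the typing and spell-check examples above): two threads run concurrently inside one process, each with its own stack and registers but sharing the process's address space.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

/* Each thread runs this function independently. */
void *worker(void *arg) {
    const char *task = (const char *)arg;
    for (int i = 1; i <= 3; i++) {
        printf("%s: iteration %d\n", task, i);
        sleep(1);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, (void *)"Typing thread");
    pthread_create(&t2, NULL, worker, (void *)"Spell-check thread");

    /* Wait for both threads to finish, similar to wait() for processes. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("All threads finished.\n");
    return 0;
}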
Multithreading
• A process is divided into a number of smaller tasks; each task is called a thread. When several threads within a process execute at a time, it is called multithreading.
• If a program is multithreaded, even when some portion of it is blocked, the whole program is not blocked.
• The rest of the program continues working if multiple CPUs are available.
Types of Threads
User Threads: Thread creation, scheduling, and management happen in user space via a thread library. User threads are faster to create and manage. If one thread in a process is blocked, the remaining threads in the same process also get blocked.

Kernel Threads: The kernel creates, schedules, and manages these threads. If one thread in a process is blocked, the overall process need not be blocked.
Process scheduling
Process scheduling is the activity of the process manager that
handles the removal of the running process from the CPU and
the selection of another process based on a particular
strategy.
Scheduling Queues
The following queues are maintained by the operating system.
1. Job Queue
As processes enter the system, they are put into the job queue. It is maintained in secondary memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
2. Ready Queue
The processes that are residing in main memory and are ready and waiting to execute are kept in the ready queue. It is maintained in primary memory. The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
3. Waiting Queue
When a process needs an I/O operation in order to complete its execution, the OS changes the state of the process from running to waiting. The context (PCB) associated with the process gets stored in the waiting queue and is used by the processor when the process finishes the I/O.
A common representation of process scheduling is a Queueing diagram
• Rectangle box represents queue
• Circle represents resource
• Arrow represents flow of the process in the queue
Scheduler
A scheduler is essentially a software system
Process Schedulers are fundamental components of operating
systems responsible for deciding the order in which processes
are executed by the CPU.
Types
• Long Term Scheduler
• Short Term Scheduler
• Medium Term Scheduler
Long Term Scheduler
• The job scheduler
• It selects processes from the pool (secondary memory)
and arranges them in the ready queue that is present in
the main memory.
• Long Term Scheduler is responsible for controlling the
degree of Multiprogramming.
• It must select the mix of CPU-bound and I/O-bound jobs
for better utilization of CPU
Short Term Scheduler
• CPU scheduler.
• It chooses one of the Jobs from the ready
queue and sends it to the CPU for processing.
• To determine which job from the ready queue
will be dispatched for execution,
a scheduling algorithm is used.
Dispatcher
The dispatcher is a special program whose work starts after the scheduler. When the scheduler completes its task of selecting a process, it is the dispatcher that gives control of the CPU to that process.
• A dispatcher does the following:
• Context switch.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
Medium Term Scheduler
Medium Term Scheduler (MTS) is responsible for moving a
process from memory to disk (or swapping).
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.
This process is called swapping, and the process is said to be
swapped out or rolled out. Swapping may be necessary to
improve the process mix.
Scheduling
Operations on Processes
• The system must provide mechanisms for:
– Process creation
– Process termination
Process Creation
• A parent process creates child processes, which in turn create other processes, forming a tree of processes
• Generally, process identified and managed via
a process identifier (pid)
• Resource sharing options
– Parent and children share all resources
– Children share subset of parent’s resources
– Parent and child share no resources
• Execution options
– Parent and children execute concurrently
– Parent waits until children terminate
Process Creation (Cont.)
• Address space
– Child duplicate of parent
– Child has a program loaded into it
• UNIX examples
– fork() system call creates new process
– exec() system call used after a fork() to replace the process’
memory space with a new program
– Parent process calls wait()waiting for the child to terminate
C Program Forking Separate Process
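A minimal sketch of such a program, consistent with the fork()/exec()/wait() bullets above (the child running the ls command is just an example choice):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* error occurred */
        perror("fork failed");
        return 1;
    }
    else if (pid == 0) {                /* child process */
        execlp("ls", "ls", "-l", (char *)NULL);  /* replace the child's memory image */
        perror("execlp failed");        /* reached only if exec fails */
        exit(1);
    }
    else {                              /* parent process */
        wait(NULL);                     /* wait for the child to complete */
        printf("Child complete\n");
    }
    return 0;
}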
Process Termination
 Process executes last statement and then asks the operating system to
delete it using the exit() system call.
• Returns status data from child to parent (via wait())
• Process’ resources are deallocated by operating system
 Parent may terminate the execution of children processes using the
abort() system call. Some reasons for doing so:
• Child has exceeded allocated resources
• Task assigned to child is no longer required
• The parent is exiting, and the operating systems does not allow a child
to continue if its parent terminates
Process Termination
 Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated.
• cascading termination. All children, grandchildren, etc., are
terminated.
• The termination is initiated by the operating system.
 The parent process may wait for termination of a child process by using the
wait()system call. The call returns status information and the pid of the
terminated process
pid = wait(&status);
 If no parent is waiting (did not invoke wait()), the process is a zombie
 If the parent terminated without invoking wait(), the process is an orphan
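A minimal sketch of how a parent can decode the status returned by wait(), using the standard WIFEXITED/WEXITSTATUS macros (the child's exit value 7 is arbitrary for the example):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {
        exit(7);                         /* child terminates with status 7 */
    } else if (pid > 0) {
        int status;
        pid_t child = wait(&status);     /* reap the child so no zombie remains */

        if (WIFEXITED(status))           /* did the child exit normally? */
            printf("Child %d exited with status %d\n",
                   child, WEXITSTATUS(status));
    }
    return 0;
}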
Cooperating Processes /Interprocess
communication (IPC)
 Processes within a system may be independent or cooperating
 Cooperating process can affect or be affected by other processes.
 Reasons for cooperating processes:
• Information sharing
• Computation speedup
• Modularity
• Convenience
 Cooperating processes need interprocess communication (IPC)
 Two models of IPC
• Shared memory
• Message passing
IPC
IPC – Shared Memory
 An area of memory shared among the processes that wish to communicate
 The communication is under the control of the user processes, not the operating system.
 A major issue is to provide a mechanism that will allow the user processes to synchronize their actions when they access shared memory.

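A minimal POSIX shared-memory sketch, assuming a UNIX-like system (the object name "/demo_shm" is illustrative; link with -lrt on some platforms): one process creates a named region and writes a message that any cooperating process mapping the same name could read.

#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>   /* shm_open, mmap */
#include <unistd.h>     /* ftruncate, close */

int main(void) {
    const size_t size = 4096;

    /* Create (or open) a named shared-memory object. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
    if (fd < 0) { perror("shm_open"); return 1; }

    ftruncate(fd, size);                      /* set the size of the region */

    /* Map the region into this process's address space. */
    char *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ptr == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any cooperating process that maps "/demo_shm" sees this data. */
    strcpy(ptr, "Hello through shared memory");
    printf("Writer stored: %s\n", ptr);

    munmap(ptr, size);
    close(fd);
    /* shm_unlink("/demo_shm"); */            /* remove the object when done */
    return 0;
}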
IPC – Message Passing
 Processes communicate with each other without resorting to shared variables
 The IPC facility provides two operations:
• send(message)
• receive(message)
 The message size is either fixed or variable

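A minimal sketch of message passing, using an ordinary UNIX pipe to stand in for the send()/receive() operations: the parent sends a message and the child receives it, with no shared variables involved.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {              /* child: the receiver */
        close(fd[1]);               /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);    /* receive(message) */
        if (n > 0) {
            buf[n] = '\0';
            printf("Child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                        /* parent: the sender */
        close(fd[0]);               /* close unused read end */
        const char *msg = "Hello via message passing";
        write(fd[1], msg, strlen(msg));                    /* send(message) */
        close(fd[1]);
        wait(NULL);                 /* wait for the child to finish */
    }
    return 0;
}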
Direct Communication
 Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
 Properties of communication link
• Links are established automatically
• A link is associated with exactly one pair of communicating
processes
• Between each pair there exists exactly one link
• The link may be unidirectional, but is usually bi-directional

Indirect Communication
 Messages are directed and received from mailboxes (also referred
to as ports)
• Each mailbox has a unique id
• Processes can communicate only if they share a mailbox
 Properties of communication link
• Link established only if processes share a common mailbox
• A link may be associated with many processes
• Each pair of processes may share several communication links
• Link may be unidirectional or bi-directional

Indirect Communication (Cont.)
 Operations
• Create a new mailbox (port)
• Send and receive messages through mailbox
• Delete a mailbox
 Primitives are defined as:
• send(A, message) – send a message to mailbox A
• receive(A, message) – receive a message from mailbox A

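The mailbox model maps naturally onto POSIX message queues; the sketch below is illustrative (the queue name "/mailboxA" and message text are invented; assumes Linux, link with -lrt): it creates a mailbox, sends one message to it, and reads it back.

#include <fcntl.h>      /* O_CREAT, O_RDWR */
#include <mqueue.h>     /* mq_open, mq_send, mq_receive */
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 64 };
    char buf[64];

    /* Create (or open) the mailbox; any process that opens "/mailboxA"
       can send to or receive from it. */
    mqd_t mq = mq_open("/mailboxA", O_CREAT | O_RDWR, 0644, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(mq, "Hello mailbox", 14, 0);                  /* send(A, message) */

    ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);   /* receive(A, message) */
    if (n >= 0)
        printf("Received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/mailboxA");                               /* delete the mailbox */
    return 0;
}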
System Calls for Process Management
• fork − For creating a duplicate process from the parent process.
• wait − Makes a process wait for another process (typically its child) to complete its work.
• exec − This function is used to replace the program executed by a process.
• exit − Terminates the process.
The pictorial representation of process management system calls is as follows
wait() system call
The wait() system call suspends the execution of the calling process (i.e., the parent process) until one of its child processes finishes execution.

This helps the parent process retrieve the termination status of the child process and avoid creating a zombie process (a process that has completed execution but still has an entry in the process table).
Syntax of wait()
#include <sys/wait.h>
pid_t wait(int *status);
Return value: On success, wait() returns the process ID (PID) of the terminated child process. On failure, it returns -1.
Parameters:
status: A pointer to an integer where the termination status of
the child process is stored.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main() {
    pid_t pid;

    pid = fork();                  // Create a new child process
    if (pid < 0) {
        perror("fork failed");
        exit(1);
    }
    else if (pid == 0) {           // Child process
        printf("Child process: PID = %d\n", getpid());
        sleep(2);
        printf("Child process exiting.\n");
        exit(0);
    }
    else {                         // Parent process
        printf("Parent process: Waiting for child to complete.\n");
        int status;
        pid_t child_pid = wait(&status);   // Wait for the child process to finish
        printf("Parent process: Child with PID = %d finished.\n", child_pid);
    }

    return 0;
}

Sample output:
Parent process: Waiting for child to complete.
Child process: PID = 12345
Child process exiting.
Parent process: Child with PID = 12345 finished.
exit() system call
The exit system call in Unix/Linux is used to terminate a process.
When a process calls exit, it releases resources allocated to it and sends an
exit status code to its parent process.
Example
#include <stdio.h>
#include <stdlib.h> // Required for the exit() function

int main()
{
printf("Program started.\n");
int error_occurred = 1;
if (error_occurred)
{
printf("Error occurred! Exiting with status 1.\n");
exit(1); // Exit with a non-zero status indicating an error
}

printf("This line won't execute if exit is called.\n");

return 0; // Return 0 for successful execution


}
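The deck gives worked examples for wait() and exit(); as a complement, here is a minimal sketch for exec (the choice of the "date" program is arbitrary): execvp() replaces the current process image while the PID stays the same.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    printf("Before exec: running the original program (PID %d)\n", getpid());

    /* execvp() replaces this process's text, data, heap, and stack with
       those of the "date" program; the PID remains the same. */
    char *args[] = { "date", NULL };
    execvp(args[0], args);

    /* This line executes only if execvp() fails. */
    perror("execvp failed");
    return 1;
}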
