PROCESS MANAGEMENT
Process control block, Context switching, Process scheduling,
Interprocess communication, Threads and Multithreading
PROCESS CONTROL BLOCK (PCB)
A Process Control Block (PCB), also known as a Task Control Block or
Task Struct, is a data structure used by computer operating systems to
manage information about a process or task.
A PCB typically stores:
Process State: The current state of the process, such as running, ready, or waiting.
CPU Registers: The contents of the processor's registers, including the program counter and the data and address registers.
I/O Status Information: The status of open files, I/O devices, and other resources the process may be using.
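The fields above can be pictured as a small record kept per process. The following is an illustrative sketch (not a real kernel structure; field names are chosen for this example):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block: one record per process."""
    pid: int
    state: str = "ready"                      # running, ready, waiting, ...
    program_counter: int = 0                  # where to resume execution
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "running"   # the scheduler dispatches this process
```

A real operating system keeps such records in kernel memory (e.g., Linux's `task_struct`), with many more fields than shown here.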
CONTEXT SWITCHING
When a context switch occurs (i.e., the operating system switches from running one process to another), the contents of the CPU registers and other relevant information are saved in the PCB of the currently running process, and the PCB of the next process to be executed is loaded.
Resume Execution: The CPU then begins executing the instructions of the newly loaded process.
Context switching is a fundamental mechanism for achieving multitasking and time-sharing in modern operating systems. It allows the illusion of simultaneous execution of multiple processes even though only one process is actively executing on the CPU at any given moment. Efficient context switching is crucial for system performance, and the overhead associated with context switching should be minimized to ensure responsiveness and effective utilization of system resources.
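The save-and-restore step can be simulated in a few lines. In this sketch each "PCB" is just a dictionary, and `cpu` stands in for the physical register set (all names here are illustrative):

```python
# Each "PCB" is a dict holding the saved state of one process.
p1 = {"pid": 1, "state": "running", "registers": {"pc": 100, "acc": 7}}
p2 = {"pid": 2, "state": "ready",   "registers": {"pc": 200, "acc": 0}}
cpu = dict(p1["registers"])   # p1 is currently on the CPU

def context_switch(current, nxt, cpu):
    current["registers"] = dict(cpu)   # save outgoing state into its PCB
    current["state"] = "ready"
    cpu.clear()
    cpu.update(nxt["registers"])       # restore incoming state from its PCB
    nxt["state"] = "running"

cpu["pc"] = 103               # p1 executes a few instructions...
context_switch(p1, p2, cpu)   # ...then the OS switches to p2
```

After the switch, p1's progress (pc = 103) is preserved in its PCB, and the CPU holds p2's saved registers, so p2 resumes exactly where it left off.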
PROCESS SCHEDULING
Process scheduling is a key component of operating systems that
manage multiple processes concurrently.
It involves selecting the next process to execute from the pool of
ready processes.
The primary goal of process scheduling is to use the CPU efficiently,
ensuring fair access to resources and providing responsive services to
users.
Here are the main concepts and algorithms related to process scheduling:
Scheduler Types:
Long-Term Scheduler (Job Scheduler): Selects processes from the job pool (e.g.,
programs on disk) and loads them into main memory for execution.
Short-Term Scheduler (CPU Scheduler): Selects processes from the ready queue
in main memory and assigns the CPU to them.
Scheduling Queues:
Job Queue: Contains all processes in the system, including those waiting to be
brought into main memory.
Ready Queue: Holds processes that are ready to execute but are waiting for the
CPU.
Blocked Queue: Contains processes that cannot proceed until a specific event
occurs (e.g., I/O completion).
Scheduling Criteria:
CPU Utilization: Maximizing CPU usage to keep the processor busy.
Waiting Time: The total time a process spends waiting in the ready queue.
Response Time: The time it takes for a system to respond to a user's input.
Scheduling Algorithms:
First-Come, First-Served (FCFS): Processes are executed in the order they arrive.
Shortest Job Next (SJN) or Shortest Job First (SJF): The process with the shortest expected CPU burst is chosen next. The preemptive variant, Shortest Remaining Time First (SRTF), always runs the process with the least remaining processing time.
Priority Scheduling: Processes are assigned priorities, and the one with the
highest priority is selected next.
Round Robin (RR): Each process is assigned a fixed time slice (quantum) to
execute before moving to the next process.
Multilevel Queue Scheduling: Processes are permanently assigned to one of several queues (e.g., by priority or process type), and each queue may use its own scheduling algorithm.
Multilevel Feedback Queue Scheduling: Similar to multilevel queues, but
processes can move between queues based on their recent behavior.
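Two of the simpler algorithms above can be simulated directly. This sketch assumes all processes arrive at time 0 and is purely illustrative (function names are my own):

```python
from collections import deque

def fcfs_waiting_times(bursts):
    """FCFS: processes run in arrival order; each waits for all before it."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time spent in the ready queue
        elapsed += burst
    return waits

def round_robin_completion(bursts, quantum):
    """Round Robin: each process runs for at most `quantum` per turn."""
    remaining = list(bursts)
    completion = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)     # not finished: back of the queue
        else:
            completion[i] = t
    return completion
```

For bursts of 5, 3, and 2 time units, FCFS yields waiting times [0, 5, 8]; Round Robin with a quantum of 2 finishes the jobs at times 10, 9, and 6, showing how RR favors short jobs at the cost of extra switching.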
Preemption:
Preemptive Scheduling: The operating system can forcefully interrupt a
currently running process to start or resume another.
Priority Inversion:
A situation where a lower-priority process holds a resource needed by a
higher-priority process, causing delays.
INTERPROCESS COMMUNICATION (IPC)
Interprocess communication (IPC) allows processes to exchange data and coordinate their actions. Several methods of IPC exist, and the choice depends on the specific requirements of the tasks at hand. Here are some common IPC mechanisms:
Pipes:
Anonymous Pipes: A one-way communication channel between related processes, typically a parent and its child.
Named Pipes (FIFO): A named pipe is a special type of file that allows processes to communicate by reading from and writing to the same file. It provides a simple way for processes to exchange data.
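A minimal pipe sketch using Python's `os.pipe`: for brevity both ends live in one process here, whereas in practice a parent would create the pipe and share one end with a child (e.g., after `fork`):

```python
import os

r, w = os.pipe()          # one-way channel: data written to w appears at r
os.write(w, b"hello")     # writer end
os.close(w)               # closing the writer signals end-of-data
msg = os.read(r, 1024)    # reader end receives the bytes
os.close(r)
```

A named pipe works the same way, except it is created with a filesystem name (e.g., `os.mkfifo` on POSIX systems) so unrelated processes can open it.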
Message Passing:
Direct or Indirect Message Passing: In direct message passing, processes
communicate directly with each other. In indirect message passing, a message is
sent to a message queue, and other processes can read from that queue.
Shared Memory:
Memory Mapping: Processes can share a portion of their memory space,
allowing them to read and write to the shared area. Changes made by one
process are immediately visible to other processes.
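Python's `multiprocessing.shared_memory` module exposes this mechanism. In this sketch the second attachment stands in for "another process" mapping the same segment by name:

```python
from multiprocessing import shared_memory

# "Writer" creates a named shared-memory segment and writes into it.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"

    # A "reader" (normally a different process) attaches by name
    # and sees the writer's bytes immediately -- no copying.
    view = shared_memory.SharedMemory(name=shm.name)
    data = bytes(view.buf[:5])
    view.close()
finally:
    shm.close()
    shm.unlink()   # remove the segment once all users are done
```

Because both sides touch the same physical memory, shared memory is the fastest IPC mechanism, but access usually has to be coordinated with a synchronization primitive such as a semaphore.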
Sockets:
Network Sockets: Processes on different machines can communicate over a
network using sockets. Sockets provide a communication endpoint for
processes to send and receive data.
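A compact local illustration uses `socket.socketpair`, which returns two already-connected endpoints (network sockets would instead connect over an address and port, but the send/receive pattern is the same):

```python
import socket

a, b = socket.socketpair()   # two connected communication endpoints
a.sendall(b"ping")           # one side sends...
reply = b.recv(1024)         # ...the other receives
b.sendall(b"pong")           # and answers back
resp = a.recv(1024)
a.close()
b.close()
```

Unlike the other mechanisms listed here, sockets also work between processes on different machines, which is why they underpin most network services.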
Semaphores:
Binary Semaphores: Used for simple synchronization, allowing or denying
access to a resource.
Counting Semaphores: Allow multiple processes to access a resource
concurrently, up to a specified limit.
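A counting semaphore can be demonstrated with `threading.Semaphore`. This sketch lets at most two of six workers into the critical region at once and records the observed peak (the bookkeeping variables are only there to make the limit visible):

```python
import threading
import time

limit = threading.Semaphore(2)   # counting semaphore: at most 2 holders
lock = threading.Lock()          # protects the bookkeeping below
active = 0
peak = 0

def worker():
    global active, peak
    with limit:                  # acquire: blocks if 2 workers are inside
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)         # simulate using the limited resource
        with lock:
            active -= 1
                                 # release happens when `with limit` exits

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

A binary semaphore is simply the special case with a count of 1, which makes it behave like a mutual-exclusion lock.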
Message Queues:
Processes can send messages to a queue, and other processes can read
messages from the queue. This provides a way for processes to communicate
asynchronously.
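The queue pattern can be sketched with Python's thread-safe `queue.Queue` standing in for a kernel message queue (the sentinel-based shutdown is one common convention, not part of the mechanism itself):

```python
import queue
import threading

mq = queue.Queue()               # thread-safe message queue

def producer():
    for i in range(3):
        mq.put(f"msg-{i}")       # send without waiting for the receiver
    mq.put(None)                 # sentinel: no more messages

received = []

def consumer():
    while True:
        m = mq.get()             # blocks until a message is available
        if m is None:
            break
        received.append(m)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

The sender never waits for the receiver to be ready, which is what makes this form of communication asynchronous.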
THREADS AND MULTITHREADING
Threads:
Threads within the same process share the same resources, such as memory space and file descriptors, while having their own registers, stack, and program counter.
Resource Sharing: Threads within the same process can easily share data, leading to efficient communication.
Multithreading:
Multithreading is a programming and execution model that allows
multiple threads to exist within the context of a single process.
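A minimal multithreading sketch using Python's `threading` module: four threads update a shared counter, and because they share the process's memory, a lock is needed to prevent lost updates:

```python
import threading

counter = 0                      # shared by all threads in this process
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # without the lock, updates could be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(1000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                     # wait for all threads to finish
```

This shows both sides of the trade-off: sharing memory makes communication between threads trivial, but it also makes synchronization the programmer's responsibility.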