
Introduction of Process Management

Last Updated : 21 Apr, 2025

Process management for a single-tasking or batch-processing system is easy, as only one process is active at a time. With multiple active processes (multiprogramming or multitasking), process management becomes complex because the CPU must be shared efficiently among many processes. Multiple active processes may share resources like memory and may communicate with each other, which makes things more complex still: the operating system must also perform process synchronization.

The main advantages of multiprogramming are better system responsiveness and better CPU utilization. Multiple processes run in an interleaved manner on a single CPU: for example, when the current process is waiting for I/O, the CPU is assigned to some other process.

CPU-Bound vs I/O-Bound Processes

A CPU-bound process requires more CPU time or spends more time in the running state. An I/O-bound process requires more I/O time and less CPU time. An I/O-bound process spends more time in the waiting state. 
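The distinction can be seen in a minimal Python sketch (the function names `cpu_bound_task` and `io_bound_task` are illustrative, and `time.sleep` merely stands in for a real disk or network wait):

```python
import time

def cpu_bound_task(n):
    # Spends its time computing: sums squares in a tight loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def io_bound_task(delay):
    # Spends its time waiting: sleep stands in for a disk or network call.
    time.sleep(delay)
    return "done"

start = time.perf_counter()
cpu_bound_task(100_000)
cpu_elapsed = time.perf_counter() - start

start = time.perf_counter()
io_bound_task(0.05)
io_elapsed = time.perf_counter() - start

# The I/O-bound task holds the CPU for almost none of its elapsed time,
# which is why the OS can run another process while it waits.
```

While `io_bound_task` is in its waiting state, the scheduler is free to dispatch a CPU-bound process, which is exactly the interleaving multiprogramming exploits.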

Process scheduling is an integral part of process management in an operating system. It refers to the mechanism the operating system uses to determine which process runs next. The goal of process scheduling is to improve overall system performance by maximizing CPU utilization and throughput while minimizing waiting and response times.

Process Management Tasks

Process management is a key part of operating systems that support multiprogramming or multitasking.

  • Process Creation and Termination : Process creation involves creating a Process ID, setting up a Process Control Block (PCB), etc. A process can be terminated either by the operating system or by the parent process. Process termination involves releasing all resources allocated to it.
  • CPU Scheduling : In a multiprogramming system, multiple processes need to get the CPU. It is the job of Operating System to ensure smooth and efficient execution of multiple processes.
  • Deadlock Handling : Making sure that the system does not reach a state where two or more processes cannot proceed due to cyclic dependency on each other.
  • Inter-Process Communication : Operating System provides facilities such as shared memory and message passing for cooperating processes to communicate.
  • Process Synchronization : Process Synchronization is the coordination of execution of multiple processes in a multiprogramming system to ensure that they access shared resources (like memory) in a controlled and predictable manner.

Process Operations

A process goes through different states before termination, and these state changes require different operations on processes by the operating system. These operations include process creation, process scheduling, execution, and killing the process. Here are the key process operations:


Process Creation

Process creation in an operating system (OS) is the act of generating a new process. This new process is an instance of a program that can execute independently.
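On POSIX systems, process creation can be demonstrated with `os.fork()`, which asks the kernel to duplicate the caller, assign the copy a fresh PID, and set up a new PCB for it (this sketch is POSIX-only and will not run on Windows):

```python
import os

pid = os.fork()  # kernel creates a new process: a duplicate of this one

if pid == 0:
    # Child process: fork() returned 0 here.
    print(f"child: pid {os.getpid()}, parent {os.getppid()}")
    os._exit(0)  # terminate the child immediately with status 0
else:
    # Parent process: fork() returned the child's PID.
    finished, status = os.waitpid(pid, 0)  # reap the child (no zombie left)
    print(f"parent: child {finished} exited with status {status}")
```

The parent's `waitpid` call also previews the termination step described below: until the parent reaps it, the finished child lingers as a zombie holding its PCB.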

Scheduling

Once a process is ready to run, it enters the "ready queue." The scheduler's job is to pick a process from this queue and start its execution.
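For a FIFO scheduling policy, the ready queue behaves like a simple double-ended queue; a minimal sketch with placeholder process names:

```python
from collections import deque

# The ready queue as a FIFO: the scheduler dispatches the process at the
# head, and newly ready processes join at the tail.
ready_queue = deque(["P1", "P2", "P3"])

running = ready_queue.popleft()   # scheduler picks P1 to run
ready_queue.append("P4")          # P4 becomes ready while P1 runs

print(running)            # P1
print(list(ready_queue))  # ['P2', 'P3', 'P4']
```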

Execution

Execution means the CPU starts working on the process. During this time, the process might:

  • Move to a waiting queue if it needs to perform an I/O operation.
  • Be preempted and moved back to the ready queue if a higher-priority process needs the CPU.

Killing the Process

After the process finishes its tasks, the operating system ends it and removes its Process Control Block (PCB).

Context Switching of Process

The process of saving the context of one process and loading the context of another process is known as Context Switching. In simple terms, the running process is unloaded (moved to the ready or waiting state) and the next ready process is loaded onto the CPU.

When Does Context Switching Happen? 

Context switching happens when:

  • When a high-priority process comes to a ready state (i.e. with higher priority than the running process). 
  • An Interrupt occurs.
  • A switch between user and kernel mode occurs (though a mode switch does not always cause a context switch).
  • Preemptive CPU scheduling is used. 

Context Switch vs Mode Switch

A mode switch occurs when the CPU privilege level changes, for example when a system call is made or a fault occurs. The kernel runs in a more privileged mode than a standard user task. If a user process wants to access resources that are only accessible to the kernel, a mode switch must occur. The currently executing process need not change during a mode switch. A mode switch typically must happen for a process context switch to take place, since only the kernel can perform a context switch.

Process Scheduling Algorithms

The operating system can use different scheduling algorithms to schedule processes. Here are some commonly used scheduling algorithms:

  • First-Come, First-Served (FCFS): This is the simplest scheduling algorithm, where the process is executed on a first-come, first-served basis. FCFS is non-preemptive, which means that once a process starts executing, it continues until it is finished or waiting for I/O. 
  • Shortest Job First (SJF): SJF selects the process with the shortest burst time, where the burst time is the time a process takes to complete its execution. It can be non-preemptive or preemptive (the preemptive variant is called Shortest Remaining Time First). SJF minimizes the average waiting time of processes.
  • Round Robin (RR): Round Robin is a preemptive scheduling algorithm that assigns each process a fixed time quantum. If a process does not complete its execution within its quantum, it is preempted and added to the end of the ready queue. RR ensures fair distribution of CPU time to all processes and avoids starvation.
  • Priority Scheduling: This scheduling algorithm assigns priority to each process and the process with the highest priority is executed first. Priority can be set based on process type, importance, or resource requirements.  
  • Multilevel Queue: This scheduling algorithm divides the ready queue into several separate queues, each with a different priority. Processes are assigned to a queue based on their type or priority, and each queue can use its own scheduling algorithm. This approach is useful in scenarios where different types of processes have different priorities.
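The effect of FCFS versus SJF on waiting time can be checked with a short calculation. This sketch assumes all processes arrive at time 0 and uses made-up burst times; `avg_waiting_time` is an illustrative helper, not a standard API:

```python
def avg_waiting_time(burst_times):
    # All processes arrive at t=0; each process waits for the sum of
    # the bursts that ran before it.
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed
        elapsed += burst
    return waiting / len(burst_times)

bursts = [6, 8, 7, 3]                   # burst times in arrival order: P1..P4

fcfs = avg_waiting_time(bursts)         # FCFS: run in arrival order
sjf = avg_waiting_time(sorted(bursts))  # SJF: run shortest burst first

print(fcfs)  # 10.25  -> waits are (0 + 6 + 14 + 21) / 4
print(sjf)   # 7.0    -> waits are (0 + 3 + 9 + 16) / 4
```

Reordering the same workload by shortest burst first cuts the average wait, which is why SJF is optimal for this metric when burst times are known in advance.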

Advantages of Process Management

  • Running Multiple Programs: Process management lets you run multiple applications at the same time, for example, listen to music while browsing the web.
  • Process Isolation: It ensures that different programs don't interfere with each other, so a problem in one program won't crash another.
  • Fair Resource Use: It makes sure resources like CPU time and memory are shared fairly among programs, so even lower-priority programs get a chance to run.
  • Smooth Switching: It efficiently handles switching between programs, saving and loading their states quickly to keep the system responsive and minimize delays.

Disadvantages of Process Management

  • Overhead: Process management uses system resources because the OS needs to keep track of various data structures and scheduling queues. This requires CPU time and memory, which can affect the system's performance.
  • Complexity: Designing and maintaining an OS is complicated due to the need for complex scheduling algorithms and resource allocation methods.
  • Deadlocks: To keep processes running smoothly together, the OS uses mechanisms like semaphores and mutex locks. However, these can lead to deadlocks, where processes get stuck waiting for each other indefinitely.
  • Increased Context Switching: In multitasking systems, the OS frequently switches between processes. Storing and loading the state of each process (context switching) takes time and computing power, which can slow down the system.

GATE-CS-Questions on Process Management

Q.1: Which of the following need not necessarily be saved on a context switch between processes? (GATE-CS-2000) 

(A) General purpose registers 

(B) Translation lookaside buffer 

(C) Program counter 

(D) All of the above 

Answer: (B)

In a process context switch, the state of the first process must be saved somehow, so that when the scheduler gets back to the execution of the first process, it can restore this state and continue. The state of the process includes all the registers that the process may be using, especially the program counter, plus any other operating system-specific data that may be necessary. A translation look-aside buffer (TLB) is a CPU cache that memory management hardware uses to improve virtual address translation speed. A TLB has a fixed number of slots that contain page table entries, which map virtual addresses to physical addresses. On a context switch, some TLB entries can become invalid, since the virtual-to-physical mapping is different. The simplest strategy to deal with this is to completely flush the TLB. 

Q.2: The time taken to switch between user and kernel modes of execution is t1 while the time taken to switch between two processes is t2. Which of the following is TRUE? (GATE-CS-2011) 

(A) t1 > t2 

(B) t1 = t2 

(C) t1 < t2 

(D) nothing can be said about the relation between t1 and t2. 

Answer: (C)

Process switching involves a mode switch plus the extra work of saving and restoring the full process state, so t2 includes t1 and more; hence t1 < t2. Context switching can occur only in kernel mode.
