The document discusses process and thread management in operating systems, explaining the concepts of processes, threads, inter-process communication, and process scheduling. It details the structure and states of processes, the creation and termination of processes, and the characteristics of threads, highlighting their shared resources and differences from processes. Additionally, it covers the significance of the Process Control Block (PCB) and the advantages of using threads for efficient resource management.
Process and Thread Management
Prepared By: Jerusalem Y.
Outline
• Process concepts
• Thread concepts
• Inter-process communication
• Process scheduling
• Deadlock
Early systems allowed only one program to be executed at a time; that single program had complete control of the system and access to all of the system's resources. Modern OSs allow multiple programs to be loaded into memory and executed concurrently. This requires firm control and compartmentalization of program execution, and these needs resulted in the notion of a process. A process is the unit of work in a modern time-sharing system, i.e., it is a program in execution.
An OS consists of a collection of processes:
1. OS processes execute system code.
2. User processes execute user code.
Conceptually, each process has its own virtual CPU, but in reality the real CPU is switched among processes; this is called multiprogramming. A process is more than the program code, which is sometimes known as the text section. A process (also called a task or job) includes the current activity, as represented by the program counter (PC) and the contents of the processor's registers.
The components of a process are:
• The program to be executed (the text section)
• The data on which the program will execute (the data section, which contains global variables)
• The resources required by the program, such as memory and file(s)
• The status of execution: the program counter (PC), which indicates the next instruction to execute
• The stack, which contains temporary data such as function parameters, return addresses, and local variables
A process may also include a heap, which is memory that is dynamically allocated during process run time.
A program by itself is not a process. A program is a passive entity, such as a file containing a list of instructions stored on disk (often called an executable file), whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources. A program becomes a process when an executable file is loaded into memory. Two common techniques for loading executable files are double-clicking an icon representing the executable file and entering the name of the executable file on the command line (e.g., prog1.exe).
Multiple processes may be associated with the same program. However, they have separate execution sequences. For instance, several users may be running different copies of the mail program, or the same user may invoke many copies of the Web browser program. Each of these is a separate process and although the text sections are equivalent, the data, heap, and stack sections vary.
Although each process is an independent entity, with its own program counter, registers, stack, open files, alarms, and other internal state, processes often need to interact, communicate, and synchronize with other processes. One process may generate some output that another process uses as input. As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process can be in any of the following states:
1. New: the process has just been created.
2. Running: the process is executing and using the CPU.
3. Waiting: the process is waiting for an event (e.g., I/O completion or a signal).
4. Ready: the process is temporarily stopped to let another process run and is waiting to be assigned to a processor.
5. Terminated: the process has finished execution.
Diagram of Process State
Null → New: a new process is created to execute a program.
• New batch job, user log-on
• Created by the OS to provide a service
New → Ready: the OS moves a process from New to Ready when it is prepared to take on an additional process.
Ready → Running: when it is time to select a new process to run, the OS selects one of the processes in the Ready state.
Running → Terminated: the currently running process is terminated by the OS if it indicates that it has completed, or if it aborts.
Running → Ready: the process has reached its maximum allowable time, or an interrupt has occurred.
Running → Waiting: a process is put in the Waiting state if it requests something for which it must wait (for example, a blocking system call request).
Waiting → Ready: a process in the Waiting state is moved to Ready when the event for which it has been waiting occurs.
Ready → Terminated: if a parent terminates, its child processes should be terminated.
Waiting → Terminated: if a parent terminates, its child processes should be terminated.
When an operating system is booted, several processes (foreground and/or background) are often created. Foreground processes are processes that interact with (human) users and perform work for them. Background processes are not associated with particular users but instead have some specific function. Processes that stay in the background to handle some activity such as serving web pages, printing, and so on are called daemons. In addition to the processes created at boot time, new processes can be created afterward as well. A new process is created by having an existing process (a running user process, a system process invoked from the keyboard or mouse, or a batch-manager process) execute a process-creation system call.
The system call tells the operating system to create a new process and indicates, directly or indirectly, which program to run in it.
• A unique ID and memory space are assigned, and a PCB (Process Control Block) is initialized.
• The creating process is called the parent process.
• A parent process creates child processes, which in turn create other processes, forming a tree of processes.
• Most operating systems (including UNIX and the Windows family) identify processes by a unique process identifier (pid), which is typically an integer.
In UNIX, the page daemon, swapper, and init are children of the root process, and user processes are children of the init process. A daemon is a long-running background process that answers requests for services. The swapper process has PID 0. Every process except process 0 (the swapper) is created when another process executes the fork() system call. init (short for initialization) is the first process started during booting of the computer system; it is a daemon process that continues running until the system is shut down.
A process needs certain resources to accomplish its task. • CPU time, memory, files, I/O devices. There are four principal events that cause processes to be created: 1. System initialization. 2. Execution of a process creation system call by a running process. 3. A user request to create a new process. 4. Initiation of a batch job.
When a process creates a new process:
1. Resource-sharing possibilities:
• Parent and children share all resources.
• Children share a subset of the parent's resources.
• Parent and child share no resources.
2. Execution possibilities: two possibilities exist in terms of execution:
• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
• The child process is a duplicate of the parent process (it has the same program and data as the parent).
• The child process has a new program loaded into it.
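As a concrete illustration of these possibilities, here is a minimal C sketch (not taken from the slides) of UNIX-style process creation: the parent calls fork(), the child loads a new program with execlp(), and the parent waits for the child to terminate. The program "ls" is only an example choice.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                    /* create a child that duplicates the parent */

    if (pid < 0) {                         /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {                 /* child: load a new program into this process */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* reached only if exec fails */
        _exit(1);
    } else {                               /* parent: wait until the child has terminated */
        int status;
        waitpid(pid, &status, 0);
        printf("Child %d finished\n", (int)pid);
    }
    return 0;
}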
Process Trees of UNIX and Solaris Systems
After a process has been created, it starts running and does whatever its job is. Processes usually terminate due to one of the following conditions:
i. Normal exit (voluntary): the process executes its last statement and asks the operating system to delete it (exit).
• Output data may be passed from the child to its parent.
• The process's resources are de-allocated by the operating system.
ii. Error exit (voluntary)
iii. Fatal error (involuntary): an error that causes the program to abort and may return the user to the operating system.
iv. Killed by another process (involuntary): a parent may terminate the execution of its child processes (abort).
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).
Most processes terminate because they have done their work.
A parent may terminate the execution of one of its children for a variety of reasons, such as these:
• The child has exceeded its usage of some of the resources that it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
The phenomenon where all child processes terminate when their parent process terminates is called cascading termination; it is normally initiated by the OS.
Each process is represented in the operating system by a Process Control Block (PCB), also called a task control block. A PCB is the data structure that the OS uses to represent a process; the OS groups all the information that it needs about a particular process in the PCB.
1. Process state: can be new, ready, running, waiting, or terminated.
2. Pointer: points to another process control block; the pointer is used for maintaining the scheduling list.
3. Program counter: indicates the address of the next instruction to be executed.
4. CPU registers: include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.
• The registers vary in number and type, depending on the computer architecture.
5. Memory-management information: includes the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system.
• This information is useful for reclaiming the memory when the process terminates.
6. CPU-scheduling information: includes information such as the process priority, pointers to scheduling queues, and any other scheduling parameters.
7. Accounting information: includes the amount of CPU and real time used, time limits, job or process numbers, account numbers, etc.
8. I/O status information: includes the list of I/O devices allocated to the process, a list of open files, and so on.
9. Event information: for a process in the blocked (waiting) state, this field contains information about the event for which the process is waiting.
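As a rough illustration only, the PCB fields listed above can be pictured as a C structure; the field names and types below are assumptions made for this sketch, not the layout of any real kernel.

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;               /* unique process identifier            */
    enum proc_state state;             /* 1. process state                     */
    struct pcb     *next;              /* 2. pointer used by scheduling lists  */
    uint64_t        program_counter;   /* 3. address of the next instruction   */
    uint64_t        registers[16];     /* 4. saved CPU registers               */
    uint64_t        base, limit;       /* 5. memory-management information     */
    int             priority;          /* 6. CPU-scheduling information        */
    uint64_t        cpu_time_used;     /* 7. accounting information            */
    int             open_files[16];    /* 8. I/O status information            */
    int             waiting_event;     /* 9. event the process is waiting for  */
};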
CPU Switch From Process to Process
A thread is a path of execution within a process.
• If many processes share many resources, the mechanism becomes bulky and difficult to handle.
• Threads were created to make this kind of resource sharing simple and efficient.
Each process has the following two characteristics:
1. Unit of resource ownership: when a new process is created, an address space containing program text and data, as well as other resources (files, child processes, pending alarms, signal handlers, accounting information), is allocated.
• The unit of resource ownership is usually referred to as a task or a process.
2. Unit of dispatching: a thread of execution, usually shortened to just "thread". The thread has a program counter that keeps track of which instruction to execute next, registers that hold its current working variables, and a stack that contains the execution history, with one frame for each procedure called but not yet returned from. The unit of dispatching is usually referred to as a thread or a light-weight process (LWP).
A thread is a basic unit of CPU utilization that consists of:
• Thread ID
• Execution state
• Program counter
• Register set
• Stack
Threads belonging to the same process share:
• its code
• its data section
• other OS resources such as open files and signals
A thread of execution is the smallest unit of processing that can be scheduled by an OS. The implementation of threads and processes differs from one OS to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources. Like process states, threads also have states: • New, Ready, Running, Waiting and Terminated Like processes, the OS will switch between threads (even though they belong to a single process) for CPU usage. Like process creation, thread creation is supported by APIs
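One widely used thread API is POSIX threads (pthreads). The sketch below (an illustration, not code from the slides) creates two threads inside one process; because the threads share the process's data section, both update the same global counter, protected by a mutex. Build with: cc file.c -pthread

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared by all threads in the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* protect the shared data section */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);  /* thread creation via the API */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                   /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* 200000: both threads saw the same memory */
    return 0;
}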
Creating threads is inexpensive (cheaper) compared to creating processes:
• They do not need a new address space, new global data, new program code, or new operating system resources.
• Context switching is faster, as the only things to save/restore are the program counter, registers, and stack.
Similarities
1. Both share the CPU, and only one thread/process is active (running) at a time.
2. Like processes, threads within a process execute sequentially.
3. Like processes, threads can create children.
4. Like processes, if one thread is blocked, another thread can run.
Differences
1. Unlike processes, threads are not independent of one another.
2. Unlike processes, all threads can access every address in the task.
3. Unlike processes, threads are designed to assist one another.
In a word processor, • A background thread may check spelling and grammar, while a foreground thread processes user input (keystrokes), while yet a third thread loads images from the hard drive, and a fourth does periodic automatic backups of the file being edited In a spreadsheet program, • One thread could display menus and read user input, while another thread executes user commands and updates the spreadsheet
Multithreading refers to the ability of an operating system to support multiple threads of execution within a single process. A traditional (heavy-weight) process has a single thread of control: there is one program counter, and one set of instructions is carried out at a time. If a process has multiple threads of control, it can perform more than one task at a time. Each thread has its own program counter, stack, and registers, but the threads share common code, data, and some operating-system data structures such as files.
Multitasking is the ability of an OS to execute more than one program simultaneously.
• Strictly speaking, no two programs on a single-processor machine can execute at the same time; the CPU switches from one program to the next so quickly that it appears as if all of the programs are executing at the same time.
Multithreading is the ability of an OS to execute the different parts of a program, called threads, simultaneously. The program has to be designed well so that the different threads do not interfere with each other.
Individual programs are all isolated from each other in terms of their memory and data, but individual threads are not, as they all share the same memory and data variables.
• Hence, implementing multitasking is relatively easier in an operating system than implementing multithreading.
Traditionally there is a single thread of execution per process. Examples:
• MS-DOS supports a single user process and a single thread.
• Older UNIX supports multiple user processes but only one thread per process.
Multithreading • Java run time environment is an example of one process with multiple threads. • Examples of OS supporting multiple processes, with each process supporting multiple threads: Windows 2000, Solaris, Linux and Mac.
Combinations of Threads and Processes
Inter-process communication is simple and easy when used occasionally. A process can be of two types:
• Independent process
• Co-operating process
An independent process is not affected by the execution of other processes, while a co-operating process can be affected by other executing processes. Although one might think that processes running independently execute very efficiently, in practice there are many situations where a co-operative nature can be utilised to increase computational speed, convenience, and modularity.
Inter-process communication (IPC) is a mechanism which allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of co-operation between them. Processes can communicate with each other in two ways:
1. Shared memory
2. Message passing
Communication between processes using shared memory requires the processes to share some variable, and it depends entirely on how the programmer implements it. One way of communicating using shared memory can be imagined like this: suppose process1 and process2 are executing simultaneously and share some resources, or use some information from each other. Process1 generates information about certain computations or resources being used and keeps it as a record in shared memory.
When process2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process1, and acts accordingly. Processes can use shared memory both for extracting information recorded by another process and for delivering specific information to another process.
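A minimal sketch of this idea using POSIX calls (an assumption for illustration, not code from the slides): the parent creates an anonymous shared mapping with mmap(), writes a record into it, and a child created with fork() reads the same memory. Real programs would add synchronization (for example a semaphore) around the shared region.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* region of memory shared between parent and child */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(shared, "record produced by process1");   /* process1 writes its record */

    if (fork() == 0) {                                /* process2 (the child)       */
        printf("process2 read: %s\n", shared);        /* reads the shared record    */
        _exit(0);
    }
    wait(NULL);
    munmap(shared, 4096);
    return 0;
}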
In this method, processes communicate with each other without using any kind of shared memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows:
• Establish a communication link (if a link already exists, there is no need to establish it again).
• Start exchanging messages using basic primitives. We need at least two primitives:
1. send(message, destination) or send(message)
2. receive(message, host) or receive(message)
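As an illustrative sketch (assumed, not from the slides), a UNIX pipe can act as the communication link, with write() and read() playing the roles of send() and receive():

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int link[2];
    if (pipe(link) == -1) { perror("pipe"); return 1; }   /* establish the link */

    if (fork() == 0) {                        /* p2: the receiver */
        char buf[64] = {0};
        close(link[1]);                       /* close the unused write end */
        read(link[0], buf, sizeof(buf) - 1);  /* receive(message)           */
        printf("p2 received: %s\n", buf);
        _exit(0);
    }

    const char *msg = "hello from p1";        /* p1: the sender            */
    close(link[0]);                           /* close the unused read end */
    write(link[1], msg, strlen(msg) + 1);     /* send(message)             */
    close(link[1]);
    wait(NULL);
    return 0;
}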
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
The OS maintains all PCBs in process scheduling queues. The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state's queue. The operating system maintains the following important process scheduling queues:
1. Job queue − keeps all the processes in the system.
2. Ready queue − keeps the set of all processes residing in main memory, ready and waiting to execute.
3. Device queues − the processes which are blocked due to unavailability of an I/O device constitute these queues.
The OS can use different policies to manage each queue (FIFO, round robin, priority, etc.). The OS scheduler determines how to move processes between the ready and run queues, which can have only one entry per processor core on the system; in the diagram on the original slide, the run queue has been merged with the CPU.
Two-State Process Model
The two-state process model refers to the running and not-running states, which are described below.
1. Running: when a new process is created, it enters the system in the running state.
2. Not running: processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a running process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
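To make the linked-list queue concrete, here is an illustrative C sketch (the names and fields are invented for this example) of the enqueue and dequeue operations a dispatcher might use on the not-running queue:

#include <stddef.h>

struct pcb { int pid; struct pcb *next; };

static struct pcb *head = NULL, *tail = NULL;    /* the not-running (ready) queue */

static void enqueue(struct pcb *p)               /* add a PCB at the tail of the queue */
{
    p->next = NULL;
    if (tail) tail->next = p; else head = p;
    tail = p;
}

static struct pcb *dequeue(void)                 /* dispatcher: take the PCB at the head */
{
    struct pcb *p = head;
    if (p) {
        head = p->next;
        if (!head) tail = NULL;
    }
    return p;
}

int main(void)
{
    struct pcb a = {1, NULL}, b = {2, NULL};
    enqueue(&a);
    enqueue(&b);
    return dequeue()->pid == 1 ? 0 : 1;          /* FIFO: PID 1 is dispatched first */
}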
A process scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. Some popular process scheduling algorithms are:
1. First-Come, First-Served (FCFS) scheduling
2. Shortest-Job-Next (SJN) scheduling
3. Priority scheduling
4. Round-Robin (RR) scheduling
These algorithms are either non-pre-emptive or pre-emptive. Non-pre-emptive algorithms are designed so that once a process enters the running state, it cannot be pre-empted until it completes its allotted time, whereas pre-emptive scheduling is based on priority: a scheduler may pre-empt a low-priority running process at any time when a high-priority process enters the ready state.
In the "First come first serve" scheduling algorithm, as the name suggests, the process which arrives first, gets executed first, or we can say that the process which requests the CPU first, gets the CPU allocated first. First Come First Serve, is just like FIFO(First in First out) Queue data structure, where the data element which is added to the queue first, is the one who leaves the queue first. This is used in Batch Systems. It's easy to understand and implement programmatically, using a queue data structure, where a new process enters through the tail of the queue, and the scheduler selects process from the head of the queue. A perfect real life example of FCFS scheduling is buying tickets at ticket counter.
For every scheduling algorithm, the average waiting time is a crucial parameter for judging its performance. AWT, or average waiting time, is the average of the waiting times of the processes in the queue waiting for the scheduler to pick them for execution. The lower the average waiting time, the better the scheduling algorithm. Example: consider the processes P1, P2, P3, P4 given in the table on the original slide, which arrive for execution in that order, each with arrival time 0 and a given burst time; let's find the average waiting time using the FCFS scheduling algorithm.
The average waiting time will be 18.75 ms. For the given processes, P1 is allocated the CPU first, so the waiting time for P1 is 0. P1 requires 21 ms to complete, so the waiting time for P2 is 21 ms. Similarly, the waiting time for P3 is the execution time of P1 plus the execution time of P2, which is (21 + 3) ms = 24 ms. For process P4 it is the sum of the execution times of P1, P2 and P3.
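The sketch below automates this calculation in C. The burst times are not visible in the extracted table, so the values 21, 3, 6 and 2 ms for P1 to P4 are reconstructed here so that the averages match the figures quoted on the slides (18.75 ms for FCFS and 4.5 ms for SJF).

#include <stdio.h>

int main(void)
{
    int burst[] = {21, 3, 6, 2};             /* assumed burst times for P1..P4 (ms)    */
    int n = 4, wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total_wait += wait;                  /* waiting time = sum of earlier bursts   */
        wait += burst[i];                    /* later processes also wait for this one */
    }
    printf("Average waiting time = %.2f ms\n", (double)total_wait / n);   /* 18.75 */
    return 0;
}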
Shortest Job First scheduling runs the process with the shortest burst time or duration first. This is the best approach to minimize waiting time. It is used in batch systems. It is of two types:
◦ Non-pre-emptive
◦ Pre-emptive
To implement it successfully, the burst time/duration of the processes should be known to the scheduler in advance, which is practically not feasible all the time. This scheduling algorithm is optimal if all the jobs/processes are available at the same time (either the arrival time is 0 for all, or the arrival time is the same for all).
Consider the processes below, available in the ready queue for execution, each with arrival time 0 and a given burst time.
As the Gantt chart on the original slide shows, process P4 is picked up first as it has the shortest burst time, then P2, followed by P3, and finally P1. We scheduled the same set of processes using the first-come-first-served algorithm and got an average waiting time of 18.75 ms, whereas with SJF the average waiting time comes out to 4.5 ms.
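With the same reconstructed burst times (21, 3, 6 and 2 ms), the SJF order is P4, P2, P3, P1, and the waiting times work out as: P4 = 0 ms, P2 = 2 ms, P3 = 2 + 3 = 5 ms, P1 = 2 + 3 + 6 = 11 ms, so the average is (0 + 2 + 5 + 11) / 4 = 4.5 ms, matching the figure quoted above.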
A priority is assigned to each process. The process with the highest priority is executed first, and so on. Processes with the same priority are executed in FCFS order. Priority can be decided based on memory requirements, time requirements, or any other resource requirement.
A fixed time slice, called the quantum, is allotted to each process for execution. Once a process has executed for the given time period, it is pre-empted and another process executes for its time period. Context switching is used to save the states of pre-empted processes.
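A small illustrative round-robin sketch in C (the quantum of 5 ms and the burst times are assumptions chosen for the example): each pass gives every unfinished process at most one quantum until all bursts reach zero.

#include <stdio.h>

int main(void)
{
    int remaining[] = {21, 3, 6, 2};         /* assumed remaining burst times (ms)    */
    int n = 4, quantum = 5, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {        /* visit the processes in circular order */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d ms\n", time, i + 1, slice);
            time += slice;                   /* process is pre-empted after its slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) done++;   /* this process has finished             */
        }
    }
    printf("All processes finish at t=%d ms\n", time);
    return 0;
}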
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of three types:
1. Long-term scheduler
2. Short-term scheduler
3. Medium-term scheduler
It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution and CPU scheduling. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler comes into use when a process changes state from new to ready.
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It manages the change of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it. Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers are faster than long-term schedulers.
Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling swapped-out processes. A running process may become suspended if it makes an I/O request; a suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage.
This process is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Comparison of the three schedulers:
• Role: the long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
• Speed: the long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies between the other two.
• Degree of multiprogramming: the long-term scheduler controls it; the short-term scheduler provides lesser control over it; the medium-term scheduler reduces it.
• Time-sharing systems: the long-term scheduler is almost absent or minimal; the short-term scheduler is also minimal; the medium-term scheduler is a part of time-sharing systems.
• Selection: the long-term scheduler selects processes from the pool and loads them into memory for execution; the short-term scheduler selects those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process. The cause of deadlocks is each process needing what another process has; this results from sharing resources such as memory, devices and links. Under normal operation, resource allocation proceeds like this:
1. Request the resource.
2. Use the resource.
3. Release the resource.
Traffic Jam
Dining Philosophers
"It takes money to make money". You can't get a job without experience; you can't get experience without a job.
Necessary conditions: ALL four of these must hold simultaneously for a deadlock to occur:
1. Mutual exclusion: one or more resources must be held by a process in an exclusive mode.
2. Hold and wait: a process holds a resource while waiting for another resource.
3. No preemption: there is only voluntary release of a resource; nobody else can make a process give up a resource.
4. Circular wait: process A waits for process B, which waits for process C, ..., which waits for process A.
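A hypothetical two-thread C sketch showing how these conditions arise in practice: each thread holds one lock (hold and wait) and then requests the lock held by the other, in the opposite order (circular wait), so the program can hang. It demonstrates the hazard and is not a pattern to copy.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;   /* resource A */
static pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;   /* resource B */

static void *t1(void *arg)
{
    pthread_mutex_lock(&res_a);      /* hold A ...                        */
    sleep(1);
    pthread_mutex_lock(&res_b);      /* ... and wait for B                */
    pthread_mutex_unlock(&res_b);
    pthread_mutex_unlock(&res_a);
    return NULL;
}

static void *t2(void *arg)
{
    pthread_mutex_lock(&res_b);      /* hold B ...                        */
    sleep(1);
    pthread_mutex_lock(&res_a);      /* ... and wait for A: circular wait */
    pthread_mutex_unlock(&res_a);
    pthread_mutex_unlock(&res_b);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_join(a, NULL);           /* with the sleeps, this usually never returns */
    pthread_join(b, NULL);
    printf("no deadlock this time\n");
    return 0;
}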
Deadlock problems can be handled in one of the following three ways:
1. Use a protocol that prevents or avoids deadlock by ensuring that the system will never enter a deadlocked state; deadlock-prevention and deadlock-avoidance schemes are used here.
2. Allow the system to enter a deadlocked state, detect it, and then recover (deadlock detection and recovery).
3. Ignore the problem and pretend that deadlocks never occur in the system.
• Used by most operating systems, including UNIX and Windows.
• It is then up to the application developer to write programs that handle deadlocks.