
3. PROCESSES
3.1 PROCESS CONCEPT
• A process is a program in execution.
• A process needs certain resources (such as CPU time, memory, files and I/O devices) to accomplish its task.
• The operating system is responsible for the following activities in connection with process management:
 Process creation and deletion.
 Scheduling of processes.
 Provision of mechanisms for:
 process synchronization
 process communication
 Deadlock handling for processes
3.1.1 The Process
• An operating system executes a variety of programs:
 Batch system – job
 Time-shared systems – user programs or tasks
• In general, a process consists of the following sections:
1. Text section: the program code associated with the process.
2. Data section: consists of global variables.
3. Stack section: consists of temporary data such as the return addresses of subroutine calls and the local variables of subroutines.
3.1.2 Process State
As a process executes, it changes state. The state of a process is defined,
in part, by the current activity of that process.
• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur.
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.
3.1.3 Process Control Block (PCB)
• Each process is represented in the operating system by a process control block (PCB)-
also called a task control block.
• A PCB contains many pieces of information associated with a specific process, including the following.
• Process state: The state may be new, ready, running, waiting, halted, and so on.
• Program counter: The counter indicates the address of the next instruction to be
executed for this process.
• CPU registers: The registers vary in number and type,
depending on the computer architecture. They include
accumulators, index registers, stack pointers, and
general-purpose registers, plus any condition-code information.
• CPU-scheduling information: This information includes a process
priority, pointers to scheduling queues, and any other scheduling
parameters.
• Memory-management information: It includes the values of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the OS.
• Accounting information: It includes the amount of CPU and real time used, time limits,
account numbers, process numbers and so on.
• I/O status information: The information includes the list of I/O devices allocated to this
process, a list of open files, and so on.
The PCB simply serves as the repository for any information that may vary from
process to process.
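To make the PCB concrete, here is a minimal sketch of how such a structure might be declared in C. The field names and sizes are illustrative assumptions only, not taken from any particular operating system; a real kernel (for example, Linux's task_struct) carries far more information.

#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                         /* process number                       */
    enum proc_state state;           /* process state                        */
    unsigned long pc;                /* program counter: next instruction    */
    unsigned long regs[16];          /* saved CPU registers                  */
    int priority;                    /* CPU-scheduling information           */
    void *page_table;                /* memory-management information        */
    unsigned long cpu_time_used;     /* accounting information               */
    int open_files[16];              /* I/O status: open file descriptors    */
    struct pcb *next;                /* link used by the scheduling queues   */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = NEW };     /* process is being created */
    p.state = READY;                               /* admitted: new -> ready   */
    printf("process %d is now in state %d\n", p.pid, p.state);
    return 0;
}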
3.2 PROCESS SCHEDULING
• Process scheduling is an essential part of a multiprogramming operating system.
• Multiprogramming systems allow more than one process to be loaded into executable memory at a time.
• The loaded processes then share the CPU using time multiplexing.
3.2.1 Scheduling Queues
• The operating system maintains three types of queues:
1. Job queue, 2. Ready queue, 3. Device queue.
• When a process enters the system, it is put into the job queue.
• Job queue – consists of all processes in the system.
• Ready queue – consists of the processes that are residing in main memory and are ready and waiting to execute.
• Device queue – the list of processes waiting for a particular I/O device. Each device has its own device queue.
• In the queueing diagram, each queue is represented by a rectangular box, the circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
• A new process is initially put in the ready queue, where it waits until it is selected for execution (a minimal sketch of such a queue appears at the end of this subsection).
• Once the process is assigned to the CPU and is executing, one of several
events could occur:
• The process could issue an I/O request and then it would be placed in an I/O
queue.
• The process could create a new subprocess and wait for its termination.
• The process could be removed forcibly from the CPU as a result of an interrupt, and put back in the ready queue.
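As a rough illustration of how such queues can be kept, the sketch below (illustrative C, not from the text) represents the ready queue as a linked list of simplified PCBs: newly ready processes are enqueued at the tail and the dispatcher dequeues the next process to run from the head.

#include <stdio.h>
#include <stdlib.h>

struct pcb {
    int pid;                  /* process number                     */
    struct pcb *next;         /* link to the next PCB in this queue */
};

struct queue {
    struct pcb *head, *tail;  /* dequeue at head, enqueue at tail   */
};

static void enqueue(struct queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL) q->tail = NULL;
    }
    return p;
}

int main(void)
{
    struct queue ready = { NULL, NULL };

    for (int pid = 1; pid <= 3; pid++) {        /* three processes become ready */
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&ready, p);
    }

    struct pcb *p;
    while ((p = dequeue(&ready)) != NULL) {     /* dispatcher picks them in order */
        printf("dispatching process %d\n", p->pid);
        free(p);
    }
    return 0;
}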
3.2.2 Schedulers
• A process migrates between the various scheduling queues throughout its
lifetime.
• The operating system must select, for scheduling purposes, processes from
these queues in some fashion.
• The selection process is carried out by the appropriate scheduler.
• Schedulers select processes based on whether they are:
 I/O bound: a process that spends more time doing I/O than computation.
 CPU bound: a process that spends more time doing computation than I/O.
• The system with the best performance has a balanced combination of CPU-bound and I/O-bound processes.
Schedulers are of three types:
1.Long Term Scheduler
2.Short Term Scheduler
3.Medium Term Scheduler
1.Long Term Scheduler
• It is also called job scheduler.
• The long-term scheduler selects processes from the job pool (processes spooled on disk) and loads them into memory for execution.
• Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ (may be slow).
• The long-term scheduler controls the degree of multiprogramming (the
number of processes in memory).
• If the degree of multiprogramming is stable, then the average rate of
process creation must be equal to the average departure rate of processes
leaving the system.
• The long-term scheduler should select a good process mix of I/O-bound
and CPU-bound processes.

2. Short Term Scheduler
• It is also called the CPU scheduler.
• The short-term scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them.
• Short-term scheduler is invoked very frequently (milliseconds) ⇒ (must be fast).
3. Medium Term Scheduler
• It is a process swapping scheduler.
• The medium-term scheduler removes processes from memory.
• It reduces the degree of multi-programming.
• At a later time, the process can be reintroduced into memory and its execution can be continued where it left off. This scheme is called swapping.
• The process is swapped out, and is later swapped in, by the medium-term
scheduler.
• Swapping may be necessary to improve the process mix.
3.2.3 Context Switch
• When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process. This is known as a context switch.
• Context-switch times are highly dependent on hardware support (a rough way to observe this is sketched below).
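As a rough way to see this dependence, the following C sketch (an illustration, not part of the text) bounces one byte between a parent and a child through two pipes, forcing a switch between the two processes on every round trip, and reports an approximate per-switch time; the number varies widely across machines and is only meaningful if both processes share a single CPU.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

#define ROUNDS 100000

int main(void)
{
    int p2c[2], c2p[2];                  /* parent->child and child->parent pipes */
    char byte = 'x';

    if (pipe(p2c) < 0 || pipe(c2p) < 0) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                      /* child: echo every byte back */
        for (int i = 0; i < ROUNDS; i++) {
            read(p2c[0], &byte, 1);
            write(c2p[1], &byte, 1);
        }
        exit(0);
    }

    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ROUNDS; i++) {   /* parent: ping, then wait for the pong */
        write(p2c[1], &byte, 1);
        read(c2p[0], &byte, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    wait(NULL);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    printf("approx. %.0f ns per context switch\n", ns / (2.0 * ROUNDS));
    return 0;
}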
3.3 OPERATIONS ON PROCESSES
• The processes in the system can execute concurrently, and they must be created
and deleted dynamically.
• The operating system must provide a mechanism (or facility) for process
creation and termination.
3.3.1 Process Creation
• A process may create several new processes, via a create-process system call,
during the course of execution.
• The creating process is called a parent process, whereas the new processes are
called the children of that process.
• Each of these new processes may in turn create other processes, forming a tree
of processes.
• A process needs certain resources like CPU time, memory, files and I/O devices
to perform the task.
• These resources may be obtained directly from the operating system or from
the parent process.
• Parent process may partition some of its resources and allocate to its children or
may share the resources with its children.
• The parent process may restrict the child process to a subset of its resources.
• Restricting a child to a subset of the parent's resources prevents any process from overloading the system by creating too many subprocesses.
• When a process creates a new process, two possibilities exist in terms of
execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
• There are also two possibilities in terms of the address space of the new
process:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
• The address space of the child process is determined in one of the following ways:
1. The parent's address space may be duplicated, and the child executes the same program.
2. The child process may have a separate address space, with its own program loaded into it.
3.3.2 Process termination
• Process termination is generally done using the exit system call.
• Once a process completes the execution of the last statement of its program, it asks the operating system to delete it.
• A terminating process may return a status value to its parent process; the parent retrieves it via the wait system call.
• The resources allocated to a process such as, physical and virtual memory,
open files and I/O buffers are deallocated by the operating system.
• Sometimes a process can terminate another process using the abort system call.
• A parent may terminate the execution of one of its children for a variety of
reasons, such as these:
 Child has exceeded allocated resources.
 Task assigned to child is no longer required.
 The parent process is exiting, and the operating system does not allow a child to continue if its parent terminates. Terminating all processes that belong to a terminating parent is called cascading termination.
C program for forking a separate process
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    pid_t pid;

    pid = fork();                       /* fork another process (system call) */
    if (pid < 0) {                      /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) {                /* child process */
        execlp("/bin/ls", "ls", NULL);
    }
    else {                              /* parent process */
        wait(NULL);                     /* parent waits for the child to complete */
        printf("Child Complete");
        exit(0);
    }
}
3.4 COOPERATING PROCESSES
• The concurrent processes executing in the operating system may be
either Independent processes or cooperating processes.
• Any process that does not share any data (temporary or persistent) with any other process is an independent process.
• If a process is independent, it cannot affect or be affected by the other processes.
• Any process that shares data with other processes is a cooperating
process.
• If a process is cooperating, it can affect or be affected by the other
processes.
Advantages of cooperating process:
Information sharing
Computation speedup
Modularity
Convenience
3.5 INTER PROCESS COMMUNICATION (IPC)
• IPC provides a mechanism to allow processes to communicate and to
synchronize their actions without sharing the same address space.
• IPC is particularly useful in a distributed environment where the
communicating processes may reside on different computers connected
with a network.
3.5.1 Message-Passing System
• The function of a message system is to allow processes to communicate
with one another without the need to resort to shared data.
• The IPC facility provides at least two operations: send(message) and receive(message); a minimal sketch of these two operations follows this list.
• There are several methods for logically implementing a link and the send/receive operations:
 Direct or indirect communication
 Symmetric or asymmetric communication
 Automatic or explicit buffering
 Send by copy or send by reference
 Fixed-sized or variable-sized messages
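The sketch below (illustrative C on a POSIX system, not part of the text) shows the send(message)/receive(message) idea with an ordinary pipe acting as the communication link between a parent and a child process.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int link[2];                        /* link[0] = read end, link[1] = write end */
    char buf[64];

    if (pipe(link) < 0)
        return 1;

    if (fork() == 0) {                  /* child: receive(message) */
        ssize_t n = read(link[0], buf, sizeof(buf) - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }

    const char *msg = "hello";          /* parent: send(message) */
    write(link[1], msg, strlen(msg));
    wait(NULL);
    return 0;
}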
3.5.2 Naming
• Processes that want to communicate must have a way to refer to each other.
They can use either direct or indirect communication.
3.5.2.1 Direct Communication
• In direct communication, processes address each other by the PIDs assigned to them by the operating system.
send (P, message) – is used for sending a message to process P.
receive (Q, message) – is used for receiving a message from Q.
Properties of the communication link:
• Links are established automatically.
• A link is associated with exactly one pair of communicating processes.
• Between each pair there exists exactly one link.
• The link may be unidirectional, but is usually bi-directional.
• In the above scheme, both processes must know about each other; this is called symmetric addressing.
• In another scheme, which is asymmetric, only one process needs to know the name of the other process.
3.5.2.2 Indirect communication
• In indirect communication, the messages are sent and received via a mailbox (also known as a port) – a repository of interprocess messages.
• A process can communicate with some other process via a number of
different mailboxes.
• Two processes can communicate only if they share a mailbox.
send (A, message) – Send a message to mailbox A.
receive (A, message) – Receive a message from mailbox A.
• The operating system allows a process to do the following operations (sketched below with a POSIX message queue standing in for the mailbox):
 create a new mailbox
 send and receive messages through the mailbox
 destroy a mailbox
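A POSIX message queue behaves much like such a mailbox. The sketch below is illustrative only (the mailbox name "/mbox_A" is made up, and on Linux the program may need to be linked with -lrt): it creates a mailbox, sends and receives one message through it, and then destroys it.

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    char buf[64];

    /* create a new mailbox */
    mqd_t mbox = mq_open("/mbox_A", O_CREAT | O_RDWR, 0600, &attr);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send (A, message) and receive (A, message) */
    mq_send(mbox, "hello", strlen("hello") + 1, 0);
    mq_receive(mbox, buf, sizeof(buf), NULL);
    printf("received: %s\n", buf);

    /* destroy the mailbox */
    mq_close(mbox);
    mq_unlink("/mbox_A");
    return 0;
}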
3.5.3 Synchronization
• Communication between processes takes place by calls to send and receive
primitives.
• Message passing may be either blocking or nonblocking-also known as
synchronous and asynchronous.
 Blocking send: The sending process is blocked until the message is
received by the receiving process or by the mailbox.
 Nonblocking send: The sending process sends the message and resumes operation.
 Blocking receive: The receiver blocks until a message is available.
 Nonblocking receive: The receiver retrieves either a valid message or a null (see the sketch below).
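As a small illustration (using a pipe rather than a full message system, and not from the text), the sketch below shows a nonblocking receive: with O_NONBLOCK set on the read end, read() returns immediately with EAGAIN when no message is available, whereas a blocking receive would simply wait inside read() until data arrived.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int link[2];
    char buf[16];

    if (pipe(link) < 0)
        return 1;

    fcntl(link[0], F_SETFL, O_NONBLOCK);           /* make the read end nonblocking */

    ssize_t n = read(link[0], buf, sizeof(buf));   /* nothing has been sent yet */
    if (n < 0 && errno == EAGAIN)
        printf("nonblocking receive: no message available\n");

    write(link[1], "hi", 2);                       /* now send a message */
    n = read(link[0], buf, sizeof(buf));           /* data available, returns at once */
    printf("received %zd bytes\n", n);
    return 0;
}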
3.5.4 Buffering
• Whether the communication is direct or indirect, messages exchanged by communicating processes reside in a temporary queue.
• The queue can be implemented in three ways:
 Zero capacity – 0 messages
Sender must wait for receiver (rendezvous).
 Bounded capacity – finite length of n messages
Sender must wait if link full.
 Unbounded capacity – infinite length
Sender never waits.
3.6 COMMUNICATION IN CLIENT - SERVER SYSTEMS
• A running process can communicate with another process running on a remote system connected via a network, with the help of communication mechanisms including:
• Sockets
• Remote procedure call
• Remote method invocation
3.6.1 Sockets
• A socket is defined as an endpoint of the communication path between two
processes.
• A pair of processes communicating over a network employs a pair of sockets-
one for each process.
• A socket is made up of an IP address concatenated with a port number.
• The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8 (see the sketch at the end of this subsection).
• Sockets use client-server architecture.
• Java provides three types of sockets.
 Connection oriented (TCP) sockets (Socket class).
 Connectionless (UDP) sockets (DatagramSocket class).
 Multicast socket (sub class of DatagramSocket class).
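The text lists Java's socket classes; purely as an illustration, and to stay consistent with the C program shown earlier, here is a sketch of the same IP-address-plus-port idea using POSIX TCP sockets. The endpoint 161.25.19.8:1625 is the illustrative address from the text, so the connect() call is expected to fail when the program is actually run.

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);          /* connection-oriented (TCP) */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(1625);                       /* port number */
    inet_pton(AF_INET, "161.25.19.8", &server.sin_addr); /* IP address  */

    /* the pair (local socket, remote socket 161.25.19.8:1625) identifies the link */
    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0)
        perror("connect");                               /* expected: example host */

    close(sock);
    return 0;
}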
3.6.2 Remote Procedure Call (RPC)
• RPC is a communication mechanism that allows a process to call a
procedure on a remote system connected via network.
• The calling process (client) can call the procedure on the remote host
(server) in the same way as it would call the local procedure.
• The RPC system provides the communication between client and server by
providing a stub on both client and server.
• For each remote procedure, the RPC system provides a separate stub on the
client side.
3.6.3 Remote Method Invocation (RMI)
• RMI is a Java-based approach that provides remote communication between programs written in the Java programming language.
• It allows an object executing in one Java Virtual Machine (JVM) to invoke methods
on an object executing in another JVM either on the same computer or on some
remote host connected via network.

RMI and RPCs differ in two fundamental ways:
• First, RPCs support procedural programming whereby only remote procedures or
functions may be called.
RMI is object-based: It supports invocation of methods on remote objects.
• Second, the parameters to remote procedures are ordinary data structures in RPC;
with RMI it is possible to pass objects as parameters to remote methods.
• By allowing a Java program to invoke methods on remote objects, RMI makes it possible for users to develop Java applications that are distributed across a network.
• To make remote methods transparent to both the client and the server, RMI
implements the remote object using stubs and skeletons.
• A stub is a proxy for the remote object; it resides with the client. The skeleton
is responsible for unmarshalling the parameters and invoking the desired
method on the server.
• Suppose a client wishes to invoke a method on a remote object Server with the signature someMethod(Object, Object) that returns a boolean value.
• The client executes the statement boolean val = Server.someMethod(A, B);
