
unit 3 first half

The document discusses the concepts of processes and threads in operating systems, explaining the differences between them, including their states, control blocks, scheduling, and context switching. It outlines the roles of various schedulers and the criteria for effective scheduling, as well as the advantages and disadvantages of user-level and kernel-level threads. Additionally, it highlights the importance of threads in improving application performance and resource utilization.

Uploaded by

vibha srivastava
Copyright
© All Rights Reserved

Process Concept

• A batch system executes jobs, whereas a time-shared system has user programs, or tasks. The terms job and process are used almost interchangeably.
• A process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.
• A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables.

Process State

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O
completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
It is important to realize that only one process can be running on any processor at
any instant.
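The legal transitions among these states can be sketched in Python. The state names follow the list above; the transition table (admit, dispatch, interrupt, I/O wait, I/O completion, exit) is a simplified illustration, not a full model:

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Simplified transition table: NEW is admitted to READY; READY is
# dispatched to RUNNING; RUNNING may be interrupted (back to READY),
# wait for an event (WAITING), or exit (TERMINATED); WAITING returns
# to READY when its event occurs.
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUNNING},
    State.RUNNING: {State.READY, State.WAITING, State.TERMINATED},
    State.WAITING: {State.READY},
    State.TERMINATED: set(),
}

def can_move(src: State, dst: State) -> bool:
    """Return True if the transition src -> dst is legal."""
    return dst in TRANSITIONS[src]
```

Note, for example, that a waiting process cannot be dispatched directly: `can_move(State.WAITING, State.RUNNING)` is false, because it must first become ready.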

Process Control Block

Each process is represented in the operating system by a process control block (PCB), also called a task control block. A PCB is shown in Figure. It contains many pieces of information associated with a specific process, including these:

• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be
executed for this process.
• CPU registers. The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack pointers,
and general-purpose registers, plus any condition-code information. Along with the
program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include such
information as the value of the base and limit registers, the page tables, or the
segment tables, depending on the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real
time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
In brief, the PCB simply serves as the repository for any information that may vary from process to process.
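The fields above map naturally onto a record type. A minimal Python sketch (the field names are illustrative, not taken from any real kernel):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                        # job or process number (accounting)
    state: str = "new"              # process state: new, ready, running, ...
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0               # CPU-scheduling information
    base: int = 0                   # memory-management: base register
    limit: int = 0                  # memory-management: limit register
    cpu_time_used: float = 0.0      # accounting information
    open_files: list = field(default_factory=list)  # I/O status information
```

A new PCB starts in the new state with empty register and file lists, e.g. `PCB(pid=7, priority=2)`.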

Process Scheduling

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU. For a single-processor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.

Scheduling Queues
As processes enter the system, they are put into a job queue, which consists of all
processes in the system. The processes that are residing in main memory and are
ready and waiting to execute are kept on a list called the ready queue. This queue is
generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list. Each PCB includes a pointer field that points to the next
PCB in the ready queue.
The system also includes other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the occurrence
of a particular event, such as the completion of an I/O request.
Suppose the process makes an I/O request to a shared device, such as a disk. Since
there are many processes in the system, the disk may be busy with the I/O request
of some other process. The process therefore may have to wait for the disk. The list
of processes waiting for a particular I/O device is called a device queue. Each
device has its own device queue.
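The ready queue and per-device queues described above can be sketched as simple FIFO lists (a toy model in which a "process" is just its pid; the function names are illustrative):

```python
from collections import deque

ready_queue = deque()                 # processes in memory, ready to run
device_queues = {"disk0": deque()}    # one queue per I/O device

def admit(pid):
    """A new process is initially put in the ready queue."""
    ready_queue.append(pid)

def dispatch():
    """Select the process at the head of the ready queue for the CPU."""
    return ready_queue.popleft()

def request_io(pid, device):
    """The running process issues an I/O request and joins the device queue."""
    device_queues[device].append(pid)

def io_complete(device):
    """On I/O completion the process moves back to the ready queue."""
    ready_queue.append(device_queues[device].popleft())
```

For example, after `admit(1); admit(2)`, `dispatch()` returns process 1; if it then requests disk I/O and the I/O completes, it rejoins the ready queue behind process 2.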

A common representation for a discussion of process scheduling is a queueing diagram, such as that in Figure. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.
A new process is initially put in the ready queue. It waits there until it is selected
for execution, or is dispatched. Once the process is allocated the CPU and is
executing, one of several events could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new subprocess and wait for the subprocess's termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt,
and be put back in the ready queue.
In the first two cases, the process eventually switches from the waiting state to the
ready state and is then put back in the ready queue. A process continues this cycle
until it terminates, at which time it is removed from all queues and has its PCB and
resources de-allocated.

Schedulers
A process migrates among the various scheduling queues throughout its lifetime.
The operating system must select, for scheduling purposes, processes from these
queues in some fashion. The selection process is carried out by the appropriate
scheduler.

The long-term scheduler, or job scheduler, selects processes from the pool of processes held on mass storage and loads them into memory for execution.

The short-term scheduler, or CPU scheduler, selects from among the processes
that are ready to execute and allocates the CPU to one of them.

Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling. This medium-term scheduler is diagrammed in Figure 3.8. The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove processes from memory (and from active contention for the CPU) and thus reduce the degree of multiprogramming.
Later, the process can be reintroduced into memory, and its execution can be
continued where it left off. This scheme is called swapping. The process is
swapped out, and is later swapped in, by the medium-term scheduler. Swapping
may be necessary to improve the process mix or because a change in memory
requirements has overcommitted available memory, requiring memory to be freed
up.

The primary distinction between these two schedulers lies in frequency of execution. The short-term scheduler must select a new process for the CPU frequently; a process may execute for only a few milliseconds before waiting for an I/O request.
The long-term scheduler executes much less frequently; minutes may separate the
creation of one new process and the next. The long-term scheduler controls the
degree of multiprogramming (the number of processes in memory). If the degree of
multiprogramming is stable, then the average rate of process creation must be
equal to the average departure rate of processes leaving the system.

Context Switch
• Interrupts cause the operating system to change a CPU from its current task and to run a kernel routine. Such operations happen frequently on general-purpose systems.
• When an interrupt occurs, the system needs to save the current context of the process running on the CPU so that it can restore that context when its processing is done, essentially suspending the process and then resuming it.
• The context is represented in the PCB of the process; it includes the value of the CPU registers, the process state, and memory-management information.
• Generically, we perform a state save of the current state of the CPU, be it in kernel or user mode, and then a state restore to resume operations.
• Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typical speeds are a few milliseconds.
• Context-switch times are highly dependent on hardware support. For instance, some processors (such as the Sun UltraSPARC) provide multiple sets of registers. A context switch here simply requires changing the pointer to the current register set. Of course, if there are more active processes than there are register sets, the system resorts to copying register data to and from memory, as before. Also, the more complex the operating system, the more work must be done during a context switch. Advanced memory-management techniques may require extra data to be switched with each context. For instance, the address space of the current process must be preserved as the space of the next task is prepared for use. How the address space is preserved, and what amount of work is needed to preserve it, depend on the memory-management method of the operating system.
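The state-save / state-restore sequence can be sketched as a toy model in Python, where the "CPU" is a dictionary of register values and each PCB keeps a saved copy of its context (all names and values here are illustrative):

```python
cpu = {"pc": 0, "r0": 0}   # the register values currently loaded on the CPU

def context_switch(old_pcb, new_pcb):
    """Save the CPU context into old_pcb, then restore new_pcb's context."""
    old_pcb["context"] = dict(cpu)    # state save (pure overhead)
    cpu.update(new_pcb["context"])    # state restore of the next process

p0 = {"context": {"pc": 0, "r0": 0}}
p1 = {"context": {"pc": 500, "r0": 7}}

cpu.update({"pc": 120, "r0": 3})   # p0 has been running for a while
context_switch(p0, p1)             # the kernel switches from p0 to p1
# cpu now holds p1's context; p0's PCB records where p0 left off
```

When p0 is later rescheduled, restoring `p0["context"]` resumes it at program counter 120, exactly where it was interrupted.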
Scheduling Criteria
Various criteria or characteristics that help in designing a good scheduling
algorithm are:
• CPU Utilization − A scheduling algorithm should be designed so that the CPU remains as busy as possible; it should make efficient use of the CPU.
• Throughput − Throughput is the amount of work completed in a unit of time; in other words, the number of processes or jobs completed per unit of time. The scheduling algorithm should aim to maximize the number of jobs processed per time unit.
• Response time − Response time is the time taken to start responding to a request. A scheduler must aim to minimize response time for interactive users.
• Turnaround time − Turnaround time refers to the time between the moment of submission of a job/process and the time of its completion. Thus, how long it takes to execute a process is also an important factor.
• Waiting time − The time a job waits for resource allocation when several jobs are competing in a multiprogramming system. The aim is to minimize waiting time.
• Fairness − A good scheduler should make sure that each process gets its fair share of the CPU.
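Turnaround and waiting time follow directly from these definitions. A sketch for three processes served in arrival order (first-come-first-served; the arrival and burst times are made-up numbers):

```python
# Each process: (arrival_time, burst_time), served in arrival order (FCFS).
processes = [(0, 5), (1, 3), (2, 4)]

clock = 0
turnaround, waiting = [], []
for arrival, burst in processes:
    clock = max(clock, arrival) + burst       # completion time of this process
    turnaround.append(clock - arrival)        # turnaround = completion - submission
    waiting.append(clock - arrival - burst)   # waiting = turnaround - burst

print(turnaround)   # [5, 7, 10]
print(waiting)      # [0, 4, 6]
```

The first process never waits; each later process waits while all earlier bursts finish, which is why FCFS can produce long waiting times for short jobs arriving behind long ones.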

Thread
• A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.
• A thread shares with its peer threads information such as the code segment, data segment, and open files. When one thread alters a data-segment memory item, all other threads see the change.
• A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating-system performance by reducing the overhead of creating and switching full processes.
• Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
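The sharing described above (common data, separate flows of control) can be demonstrated with Python's threading module. A minimal sketch; the lock is needed precisely because peer threads share the counter variable:

```python
import threading

counter = 0                     # shared data: visible to all peer threads
lock = threading.Lock()

def worker(increments):
    """Each thread has its own stack and program counter but shares counter."""
    global counter
    for _ in range(increments):
        with lock:              # serialize updates to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                    # wait for all peer threads to terminate

print(counter)   # 4000: every thread saw and updated the same variable
```

Removing the lock makes the final value unpredictable, which illustrates the other side of sharing: one thread can change another thread's data at any time.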

Difference between Process and Thread


1. Process: A process is heavyweight or resource intensive.
   Thread: A thread is lightweight, taking fewer resources than a process.
2. Process: Process switching needs interaction with the operating system.
   Thread: Thread switching does not need to interact with the operating system.
3. Process: In multiple processing environments, each process executes the same code but has its own memory and file resources.
   Thread: All threads of a process can share the same set of open files and child processes.
4. Process: While a process is blocked, no work is done for that process until it is unblocked.
   Thread: While one thread is blocked and waiting, a second thread in the same task can run.
5. Process: Multiple processes without using threads use more resources.
   Thread: Multithreaded processes use fewer resources.
6. Process: Each process operates independently of the others.
   Thread: One thread can read, write, or change another thread's data.

Advantages of Thread

• Threads minimize the context-switching time.
• Use of threads provides concurrency within a process.
• Threads allow efficient communication.
• It is more economical to create and context-switch threads than processes.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread
Threads are implemented in the following two ways:
• User-Level Threads − User-managed threads.
• Kernel-Level Threads − Operating-system-managed threads acting on the kernel, the operating system core.
Difference between User-Level & Kernel-Level Thread

1. User-Level: User-level threads are faster to create and manage.
   Kernel-Level: Kernel-level threads are slower to create and manage.
2. User-Level: Implementation is by a thread library at the user level.
   Kernel-Level: The operating system supports creation of kernel threads.
3. User-Level: A user-level thread is generic and can run on any operating system.
   Kernel-Level: A kernel-level thread is specific to the operating system.
4. User-Level: Multithreaded applications cannot take advantage of multiprocessing.
   Kernel-Level: Kernel routines themselves can be multithreaded.

User Level Threads


In this case, the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application starts with a single thread.
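A user-level thread library can be sketched with Python generators: the underlying system sees a single flow of control, while the library schedules its logical threads entirely on its own (a toy round-robin model; every name here is illustrative):

```python
def thread(name, steps):
    """A user-level thread: it yields control to the library at every step."""
    for i in range(steps):
        yield f"{name}:{i}"        # each yield is a voluntary thread switch

def run(threads):
    """Round-robin user-level scheduler: no kernel involvement at all."""
    trace = []
    while threads:
        t = threads.pop(0)         # take the thread at the head of the queue
        try:
            trace.append(next(t))  # let it run until its next yield
            threads.append(t)      # back of the ready queue
        except StopIteration:
            pass                   # the thread terminated
    return trace

trace = run([thread("A", 2), thread("B", 2)])
print(trace)   # ['A:0', 'B:0', 'A:1', 'B:1']
```

Because switching is just a Python-level function call, it needs no kernel-mode privileges, which is exactly the first advantage listed below.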

Advantages

• Thread switching does not require kernel-mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application specific in the user-level thread library.
• User-level threads are fast to create and manage.

Disadvantages

• In a typical operating system, most system calls are blocking, so when one user-level thread blocks, the entire process blocks.
• A multithreaded application cannot take advantage of multiprocessing.

Kernel Level Threads


In this case, thread management is done by the kernel. There is no thread-management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process. The kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling, and management in kernel space. Kernel threads are generally slower to create and manage than user threads.

Advantages

• The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.

Disadvantages

• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
