unit 3 first half
Process Control Block (PCB)
• Process state. The state may be new, ready, running, waiting, halted, and so on.
• Program counter. The counter indicates the address of the next instruction to be
executed for this process.
• CPU registers. The registers vary in number and type, depending on the
computer architecture. They include accumulators, index registers, stack pointers,
and general-purpose registers, plus any condition-code information. Along with the
program counter, this state information must be saved when an interrupt occurs, to
allow the process to be continued correctly afterward.
• CPU-scheduling information. This information includes a process priority,
pointers to scheduling queues, and any other scheduling parameters.
• Memory-management information. This information may include the values of
the base and limit registers, the page tables, or the segment tables, depending on
the memory system used by the operating system.
• Accounting information. This information includes the amount of CPU and real
time used, time limits, account numbers, job or process numbers, and so on.
• I/O status information. This information includes the list of I/O devices
allocated to the process, a list of open files, and so on.
In brief, the PCB simply serves as the repository for any information that may
vary from process to process.
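The fields above can be sketched as a C struct. This is a minimal illustration only; the field names and sizes are assumptions, and a real kernel's PCB (e.g. Linux's task_struct) is far larger:

```c
#include <stdint.h>

/* Hypothetical process states, mirroring the list above. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Minimal sketch of a process control block (PCB).
 * Field names are illustrative, not from any real kernel. */
struct pcb {
    int             pid;             /* job or process number      */
    enum proc_state state;           /* process state              */
    uint64_t        program_counter; /* next instruction address   */
    uint64_t        registers[16];   /* saved CPU registers        */
    int             priority;        /* CPU-scheduling information */
    uint64_t        base, limit;     /* memory-management info     */
    uint64_t        cpu_time_used;   /* accounting information     */
    int             open_files[8];   /* I/O status information     */
    struct pcb     *next;            /* link for scheduling queues */
};
```

The next pointer at the end is what lets the same structure be chained into the ready and device queues described below.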
Process Scheduling
Scheduling Queues
As processes enter the system, they are put into a job queue, which consists of all
processes in the system. The processes that are residing in main memory and are
ready and waiting to execute are kept on a list called the ready queue. This queue is
generally stored as a linked list. A ready-queue header contains pointers to the first
and final PCBs in the list. Each PCB includes a pointer field that points to the next
PCB in the ready queue.
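A minimal sketch of such a linked-list ready queue, assuming a pcb structure that carries a next pointer (all names here are illustrative):

```c
#include <stddef.h>

struct pcb {
    int pid;
    struct pcb *next;   /* pointer to the next PCB in the queue */
};

/* Ready-queue header: pointers to the first and final PCBs. */
struct queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Append a PCB at the tail of the queue. */
void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;            /* queue was empty */
    q->tail = p;
}

/* Remove and return the PCB at the head (NULL if empty). */
struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;     /* queue is now empty */
    }
    return p;
}
```

Device queues can use exactly the same structure, one queue per device.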
The system also includes other queues. When a process is allocated the CPU, it
executes for a while and eventually quits, is interrupted, or waits for the occurrence
of a particular event, such as the completion of an I/O request.
Suppose the process makes an I/O request to a shared device, such as a disk. Since
there are many processes in the system, the disk may be busy with the I/O request
of some other process. The process therefore may have to wait for the disk. The list
of processes waiting for a particular I/O device is called a device queue. Each
device has its own device queue.
Schedulers
A process migrates among the various scheduling queues throughout its lifetime.
The operating system must select, for scheduling purposes, processes from these
queues in some fashion. The selection process is carried out by the appropriate
scheduler.
The long-term scheduler, or job scheduler, selects processes from the pool of
processes spooled to a mass-storage device and loads them into memory for
execution.
The short-term scheduler, or CPU scheduler, selects from among the processes
that are ready to execute and allocates the CPU to one of them.
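One way to sketch the short-term scheduler's selection step is a priority scan of the process table. The convention that a lower number means higher priority is an assumption here, not the only possible policy:

```c
#include <stddef.h>

enum proc_state { READY, RUNNING, WAITING };

struct pcb {
    int pid;
    enum proc_state state;
    int priority;   /* lower value = higher priority (assumed convention) */
};

/* Scan the table of processes and return the ready process with the
 * best (lowest) priority value, or NULL if none is ready. */
struct pcb *pick_next(struct pcb procs[], size_t n) {
    struct pcb *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (procs[i].state != READY)
            continue;
        if (!best || procs[i].priority < best->priority)
            best = &procs[i];
    }
    return best;
}
```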
Context Switch
Interrupts cause the operating system to change a CPU from its current task
and to run a kernel routine. Such operations happen frequently on general-
purpose systems.
When an interrupt occurs, the system needs to save the current context of the
process currently running on the CPU so that it can restore that context when
its processing is done, essentially suspending the process and then resuming
it.
The context is represented in the PCB of the process; it includes the value of
the CPU registers, the process state and memory-management information.
Generically, we perform a state save of the current state of the CPU, be it in
kernel or user mode, and then a state restore to resume operations.
Switching the CPU to another process requires performing a state save of the
current process and a state restore of a different process. This task is known
as a context switch. When a context switch occurs, the kernel saves the
context of the old process in its PCB and loads the saved context of the new
process scheduled to run. Context-switch time is pure overhead, because the
system does no useful work while switching. Its speed varies from machine
to machine, depending on the memory speed, the number of registers that
must be copied, and the existence of special instructions (such as a single
instruction to load or store all registers). Typical speeds are a few
milliseconds.
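The save/restore sequence can be sketched in C as below. This is purely illustrative: a real context switch is written in assembly, because it must capture the actual hardware registers rather than a copy held in memory.

```c
#define NREGS 16

/* Simplified view of the CPU state that must be preserved. */
struct cpu { unsigned long regs[NREGS]; unsigned long pc; };

/* The context is represented in the PCB of the process. */
struct pcb { struct cpu context; };

/* State save: copy the CPU's registers and program counter
 * into the old process's PCB. */
void state_save(struct pcb *old, const struct cpu *cpu) {
    old->context = *cpu;
}

/* State restore: load the new process's saved context
 * back onto the CPU. */
void state_restore(const struct pcb *new_p, struct cpu *cpu) {
    *cpu = new_p->context;
}

/* A context switch is a state save of the old process followed
 * by a state restore of the new one. */
void context_switch(struct pcb *old, struct pcb *new_p, struct cpu *cpu) {
    state_save(old, cpu);
    state_restore(new_p, cpu);
}
```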
Context-switch times are highly dependent on hardware support. For
instance, some processors (such as the Sun UltraSPARC) provide multiple
sets of registers. A context switch here simply requires changing the pointer
to the current register set. Of course, if there are more active processes than
there are register sets, the system resorts to copying register data to and from
memory, as before. Also, the more complex the operating system, the more
work must be done during a context switch. Advanced memory-management
techniques may require extra data to be switched with each context. For
instance, the address space of the current process must be preserved as the
space of the next task is prepared for use. How the address space is
preserved, and what amount of work is needed to preserve it, depend on the
memory-management method of the operating system.
Scheduling Criteria
Various criteria or characteristics that help in designing a good scheduling
algorithm are:
CPU Utilization − A scheduling algorithm should be designed so that the CPU
remains as busy as possible. It should make efficient use of the CPU.
Throughput − Throughput is the amount of work completed in a unit of
time; in other words, it is the number of processes or jobs completed per
unit of time. The scheduling algorithm should aim to maximize the number
of jobs processed per time unit.
Response time − Response time is the time taken to start responding to the
request. A scheduler must aim to minimize response time for interactive
users.
Turnaround time − Turnaround time refers to the time between the
moment of submission of a job/process and the time of its completion.
Thus how long it takes to execute a process is also an important factor.
Waiting time − It is the time a job waits for resource allocation when
several jobs are competing in a multiprogramming system. The aim is to
minimize the waiting time.
Fairness − A good scheduler should make sure that each process gets its
fair share of the CPU.
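Two of these metrics follow directly from a process's timestamps: turnaround time is completion time minus arrival time, and waiting time is turnaround time minus the CPU burst time. A small illustration with made-up numbers:

```c
struct job {
    int arrival;     /* time the job was submitted       */
    int burst;       /* CPU time the job actually needs  */
    int completion;  /* time the job finished            */
};

/* Turnaround time: total time the job spent in the system. */
int turnaround_time(const struct job *j) {
    return j->completion - j->arrival;
}

/* Waiting time: time spent sitting in queues, i.e. total time
 * in the system minus time actually running on the CPU. */
int waiting_time(const struct job *j) {
    return turnaround_time(j) - j->burst;
}
```

For a job that arrives at time 0, needs 4 units of CPU, and finishes at time 10, the turnaround time is 10 and the waiting time is 6.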
Thread
A thread is a flow of execution through the process code, with its own
program counter that keeps track of which instruction to execute next,
system registers which hold its current working variables, and a stack which
contains the execution history.
A thread shares with its peer threads information such as the code segment,
the data segment, and open files. When one thread alters a data-segment
memory item, all other threads see the change.
A thread is also called a lightweight process. Threads provide a way to
improve application performance through parallelism; they represent a
software approach to improving operating-system performance by reducing
the overhead of a classical process.
Each thread belongs to exactly one process and no thread can exist outside
a process. Each thread represents a separate flow of control. Threads have
been successfully used in implementing network servers and web servers.
They also provide a suitable foundation for parallel execution of
applications on shared memory multiprocessors. The following figure
shows the working of a single-threaded and a multithreaded process.
• Processes: In a multiprocessing environment, each process executes the
same code but has its own memory and file resources. Threads: All threads
of a process can share the same set of open files and child processes.
• Processes: Multiple processes operate independently of one another.
Threads: One thread can read, write, or change another thread's data.
Advantages and Disadvantages of Kernel-Level Threads
Advantages
Kernel can simultaneously schedule multiple threads from the same process
on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread
of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than user
threads.
Transfer of control from one thread to another within the same process
requires a mode switch to the Kernel.