COSC 0120 LEC IV-VI
Program
A program is a piece of code, which may range from a single line to millions of lines. A computer program is usually written by a computer programmer in a programming language. For example, here is a simple program written in the C programming language −

#include <stdio.h>

int main() {
   return 0;
}
In general, a process can be in one of the following five states at a time.

1. Start
This is the initial state when a process is first created.

2. Ready
The process is waiting to be assigned to a processor so that it can run.

3. Running
The process is being executed by the CPU.

4. Waiting
The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.

5. Terminated or Exit
Once the process finishes its execution, or is terminated by the operating system, it is moved to this state, where it waits to be removed from main memory.
Process Control Block (PCB)
A Process Control Block is a data structure maintained by the operating system for every process. The PCB keeps all the information needed to track a process, as listed below.

1. Process State
The current state of the process, i.e., whether it is ready, running, waiting, or whatever.

2. Process privileges
This is required to allow/disallow access to system resources.

3. Process ID
Unique identification for each of the processes in the operating system.

4. Pointer
A pointer to the parent process.

5. Program Counter
The program counter is a pointer to the address of the next instruction to be executed for this process.

6. CPU registers
Various CPU registers where process information needs to be stored for execution in the running state.

7. CPU Scheduling Information
Process priority and other scheduling information which is required to schedule the process.

8. Memory management information
This includes the information of the page table, memory limits, and segment table, depending on the memory scheme used by the operating system.

9. Accounting information
This includes the amount of CPU used for process execution, time limits, execution ID, etc.

10. IO status information
This includes a list of I/O devices allocated to the process.
Process Scheduling Queues
The OS maintains the following important process scheduling queues −

Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting to execute. A new process is always put in this queue.
Device queues − The processes which are blocked due to the unavailability of an I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues, which can only have one entry per processor core on the system; in the above diagram, it has been merged with the CPU.
Two-State Process Model
The two-state process model refers to the running and not-running states, described below.

1. Running
When a new process is created, it enters the system in the running state.

2. Not Running
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process. The queue is implemented using a linked list. The dispatcher is used as follows: when a process is interrupted, that process is transferred to the waiting queue. If the process has completed or aborted, the process is discarded. In either case, the dispatcher then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in
various ways. Their main task is to select the jobs to be submitted into the
system and to decide which process to run. Schedulers are of three types −
Long-Term Scheduler
Short-Term Scheduler
Medium-Term Scheduler
Context Switch
A context switch is the mechanism to store and restore the state or context of a CPU in the Process Control Block so that a process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the state of the currently running process is stored into its process control block. After this, the state for the process to run next is loaded from its own PCB and used to set the PC, registers, etc. At that point, the second process can start executing.
Context switches are computationally intensive, since register and memory state must be saved and restored. To reduce the amount of time spent context switching, some hardware systems employ two or more sets of processor registers. When the process is switched, the following information is stored for later use −

Program Counter
Scheduling information
Changed State
Accounting information
//end of LEC IV
LEC V
First Come First Serve (FCFS)
Jobs are executed in the order in which they arrive in the ready state.
Given the process table below, the wait time of each process is as follows −

P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13

Average wait time: (0 + 4 + 6 + 13) / 4 = 5.75

Shortest Job Next (SJN)
The processor should know in advance how much time the process will take. It is often used in batch environments where short jobs need to be given preference.

Process | Arrival Time | Execution Time | Service Time
P0      | 0            | 5              | 3
P1      | 1            | 3              | 0
P2      | 2            | 8              | 14
P3      | 3            | 6              | 8

Gantt Chart (figure omitted)
Wait time of each process is as follows −

P0: 3 - 0 = 3
P1: 0 - 0 = 0
P2: 14 - 2 = 12
P3: 8 - 3 = 5

Average wait time: (3 + 0 + 12 + 5) / 4 = 5.00

Priority Scheduling
Each process is assigned a priority, and the process with the highest priority is executed first. Processes with the same priority are executed on a first come, first served basis.
Wait time of each process is as follows −

P0: 9 - 0 = 9
P1: 6 - 1 = 5
P2: 14 - 2 = 12
P3: 0 - 0 = 0

Average wait time: (9 + 5 + 12 + 0) / 4 = 6.50
Round Robin Scheduling
Round Robin is a preemptive process scheduling algorithm. Each process is given a fixed time to execute, called a quantum. Once a process has executed for a given time period, it is preempted and another process executes for a given time period.

Process | Arrival Time | Execution Time
P0      | 0            | 5
P1      | 1            | 3
P2      | 2            | 8
P3      | 3            | 6

Quantum = 3

Wait time of each process is as follows −

P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11

Average wait time: (9 + 2 + 12 + 11) / 4 = 8.50
Multiple-Level Queues Scheduling
Multiple-level queues maintain processes in separate queues based on some common characteristic. For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm assigned to the queue.
LEC VI
Thread
A thread shares information such as the code segment, data segment, and open files with its peer threads. When one thread alters a code segment memory item, all other threads see that.
Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of a single-threaded and a multithreaded process.
Difference between Process and Thread

S.N. | Process | Thread
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
Advantages of Thread
Threads minimize the context switching time.
Efficient communication.
Types of Thread
Threads are implemented in the following two ways −

User Level Threads − user-managed threads.
Kernel Level Threads − operating-system-managed threads acting on the kernel.

Kernel Level Threads
The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling, and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages
The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another thread of the same process.

Disadvantages
Kernel threads are generally slower to create and manage than user threads.
Transfer of control from one thread to another within the same process requires a mode switch to the Kernel.
Multithreading Models
Some operating systems provide a combined user-level thread and Kernel-level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process. Multithreading models are of three types −

Many-to-many relationship
Many-to-one relationship
One-to-one relationship
//CAT 1
Difference between User-Level & Kernel-Level Thread

S.N. | User-Level Threads | Kernel-Level Threads

Memory Management
This section covers basic concepts related to Memory Management.
1. Symbolic addresses
The addresses used in source code. Variable names, constants, and instruction labels are the basic elements of the symbolic address space.

2. Relative addresses
At the time of compilation, a compiler converts symbolic addresses into relative addresses.

3. Physical addresses
The loader generates these addresses at the time when a program is loaded into main memory.
The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to use address location 100 will be dynamically relocated to location 10100.
The user program deals with virtual addresses; it never sees the real physical addresses.
Static vs Dynamic Loading
If you are writing a dynamically loaded program, then your compiler will compile the program, and for all the modules which you want to include dynamically, only references will be provided; the rest of the work will be done at the time of execution.
At the time of loading, with static loading, the absolute program (and data) is loaded into memory in order for execution to start.
If you are using dynamic loading, dynamic routines of the library are stored on a disk in relocatable form and are loaded into memory only when they are needed by the program.
Static vs Dynamic Linking
As explained above, when static linking is used, the linker combines all
other modules needed by a program into a single executable program to
avoid any runtime dependency.
When dynamic linking is used, it is not required to link the actual module or library with the program; instead, a reference to the dynamic module is provided at the time of compilation and linking. Dynamic Link Libraries (DLL) in Windows and Shared Objects in Unix are good examples of dynamic libraries.
Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or moved) to secondary storage (disk), making that memory available to other processes. At some later time, the system swaps the process back from secondary storage to main memory.
Let us assume that the user process is of size 2048KB and that the standard hard disk where swapping will take place has a data transfer rate of around 1 MB per second. The actual transfer of the 2048KB process to or from memory will take

2048KB / 1024KB per second
= 2 seconds
= 2000 milliseconds

Now, considering both swap-in and swap-out time, it will take a complete 4000 milliseconds plus other overhead where the process competes to regain main memory.
Memory Allocation
Main memory usually has two partitions −
1. Single-partition allocation
In this type of allocation, the relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address, whereas the limit register contains the range of logical addresses.

2. Multiple-partition allocation
In this type of allocation, main memory is divided into a number of fixed-sized partitions, where each partition should contain only one process. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
Fragmentation
As processes are loaded and removed from memory, the free memory space is broken into little pieces. It sometimes happens that processes cannot be allocated to memory blocks because the blocks are too small, and the memory blocks remain unused. This problem is known as Fragmentation.
Fragmentation is of two types −
1. External fragmentation
Total memory space is enough to satisfy a request or to hold a process, but it is not contiguous, so it cannot be used.

2. Internal fragmentation
The memory block assigned to a process is bigger than requested; some portion of it is left unused, as it cannot be used by another process.
Paging
A computer can address more memory than the amount physically installed on the system. This extra memory is actually called virtual memory, and it is a section of a hard disk that is set up to emulate the computer's RAM. The paging technique plays an important role in implementing virtual memory.
A data structure called the page map table is used to keep track of the relation between a page of a process and a frame in physical memory.
When the system allocates a frame to any page, it translates this logical address into a physical address and creates an entry in the page table to be used throughout the execution of the program.
This process continues during the whole execution of the program: the OS keeps removing idle pages from main memory, writing them onto secondary memory, and bringing them back when required by the program.
Segmentation
Segmentation is a memory management technique in which each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions. Each segment is actually a different logical address space of the program.

Virtual Memory
The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory protection, because each virtual address is translated to a physical address.
Following are the situations when the entire program is not required to be fully loaded in main memory −

User-written error-handling routines are used only when an error occurs in the data or computation.
Certain options and features of a program may be used rarely.
Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.

The ability to execute a program that is only partially in memory would confer many benefits −

Fewer I/Os would be needed to load or swap each user program into memory.
A program would no longer be constrained by the amount of physical memory that is available.
Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into main memory. Instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced.
While executing a program, if the program references a page which is not available in main memory because it was swapped out a little while ago, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system to demand the page back into memory.
Advantages
Following are the advantages of Demand Paging −

Large virtual memory.
More efficient use of memory.
There is no limit on the degree of multiprogramming.

Page Replacement Algorithms
When the page that was selected for replacement was paged out and is referenced again, it has to be read in from disk, and this requires I/O completion. This process determines the quality of the page replacement algorithm: the lesser the time spent waiting for page-ins, the better the algorithm.
Reference String
The string of memory references is called a reference string. Reference strings are generated artificially or by tracing a given system and recording the address of each memory reference. The latter choice produces a large amount of data, where we note two things.
For a given page size, we need to consider only the page number, not the entire
address.
If we have a reference to a page p, then any immediately following references to
page p will never cause a page fault. Page p will be in memory after the first
reference; the immediately following references will not fault.
For example, consider the following sequence of addresses −
123,215,600,1234,76,96
If page size is 100, then the reference string is 1,2,6,12,0,0
With a pool of free frames, page replacement can be sped up as follows −

Write the new page into a frame from the free pool, mark the page table, and restart the process.
Then write the dirty page out to disk and place the frame holding the replaced page into the free pool.
I/O Devices
Block devices − A block device is one with which the driver communicates by sending entire blocks of data. For example, hard disks, USB cameras, Disk-On-Key, etc.
Character devices − A character device is one with which the driver communicates by sending and receiving single characters (bytes, octets). For example, serial ports, parallel ports, sound cards, etc.
//END OF LEC VI