Bca IV Sem Os -i Unit Notes
An Operating System (OS) is an interface between a Computer user and Computer hardware. An operating
system is a software, which performs all the basic tasks like file management, memory management, process
management, handling input and output, and controlling peripheral devices such as disk drives and printers.
An operating system is software that enables applications to interact with a computer's hardware. The
software that contains the core components of the operating system is called the kernel.
The primary purposes of an Operating System are to enable applications (software) to interact with a
computer's hardware and to manage a system's hardware and software resources.
Some popular Operating Systems include Linux Operating System, Windows Operating System, VMS,
OS/400, AIX, z/OS, etc. Today, operating systems are found in almost every device: mobile phones, personal computers, mainframe computers, automobiles, TVs, toys, etc.
Definitions
An operating system is a program that acts as an interface between the user and the computer hardware and
controls the execution of all kinds of programs.
An Operating System is the low-level software that supports a computer's basic functions, such as scheduling
tasks and controlling peripherals.
An operating system (OS) is system software that manages computer hardware, software resources, and
provides common services for computer programs.
Operating System Architecture
We can draw a generic architecture diagram of an Operating System: user applications sit at the top, the kernel beneath them, and the computer hardware at the bottom, with the OS mediating between applications and hardware.
Objectives of Operating System: The objectives of the operating system are −
• To make the computer system convenient to use in an efficient manner.
• To hide the details of the hardware resources from the users.
• To provide users a convenient interface to use the computer system.
• To act as an intermediary between the hardware and its users, making it easier for the users to access
and use other resources.
• To manage the resources of a computer system.
• To keep track of who is using which resource, granting resource requests, and mediating conflicting
requests from different programs and users.
• To provide efficient and fair sharing of resources among users and programs.
Characteristics of Operating System
The most prominent characteristic features of Operating Systems are:
• Memory Management − Keeps track of the primary memory, i.e. what part of it is in use by whom,
what part is not in use, etc. and allocates the memory when a process or program requests it.
• Processor Management − Allocates the processor (CPU) to a process and deallocates the processor
when it is no longer required.
• Device Management − Keeps track of all the devices. This is also called I/O controller that decides
which process gets the device, when, and for how much time.
• File Management − Keeps track of the information, location, and status of files; decides who gets access to the files, and allocates and de-allocates them.
• Security − Prevents unauthorized access to programs and data by means of passwords and other similar
techniques.
• Job Accounting − Keeps track of time and resources used by various jobs and/or users.
• Control over System Performance − Records delays between the request for a service and the response from the system.
• Interaction with the Operators − Interaction may take place via the console of the computer in the form of instructions. The Operating System acknowledges the request, performs the corresponding action, and informs the operator via the display screen.
• Error-detecting Aids − Production of dumps, traces, error messages, and other debugging and error-
detecting methods.
• Coordination between other Software and Users − Coordination and assignment of compilers,
interpreters, assemblers, and other software to the various users of the computer systems.
• Reliability − A good operating system runs dependably and protects the system against viruses and other harmful code.
• Easy to use − It is easy to operate, especially when it provides a GUI.
First Generation (1945-1955)
The earliest computers had no operating system; users worked directly with the machine hardware and had to follow a number of steps to execute a program. The programming language FORTRAN was developed by John W. Backus in 1956.
Second Generation (1956-1964)
The second generation of computer hardware was most notably characterized by transistors replacing vacuum
tubes as the hardware component technology. The first operating system, GMOS, was developed by General Motors for IBM's mainframe computers. GMOS was a single-stream batch processing system: it collected similar jobs in groups or batches and submitted them to the machine on punch cards, completing them one after another. After completing one job, the operating system cleaned up and then read in and initiated the next job from the punch cards.
Researchers began to experiment with multiprogramming and multiprocessing in their computing services
called the time-sharing system. A noteworthy example is the Compatible Time Sharing System (CTSS),
developed at MIT during the early 1960s.
Third Generation (1964-1979)
The third generation officially began in April 1964 with IBM’s announcement of its System/360 family of
computers. Hardware technology began to use integrated circuits (ICs) which yielded significant advantages
in both speed and economy.
Operating system development continued with the introduction and widespread adoption of multiprogramming. The idea of taking fuller advantage of the computer's data channel I/O capabilities also continued to develop. Another advance, one that later led to the personal computers of the fourth generation, was the arrival of minicomputers, beginning with the DEC PDP-1. The third generation was an exciting time, indeed, for the development of both computer hardware and the accompanying operating systems.
Fourth Generation (1979 – Present)
The fourth generation is characterized by the appearance of the personal computer and the workstation. The
component technology of the third generation was replaced by very large scale integration (VLSI). Many
operating systems which we are using today, like Windows, Linux, and macOS, were developed in the fourth generation.
Following are some of the important functions of an operating system.
1. Memory Management
Memory management refers to management of Primary Memory or Main Memory. Main memory is a large
array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in the main memory. An Operating System does the following activities for memory management −
• Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.
• In multiprogramming, the OS decides which process will get memory when and how much.
• Allocates the memory when a process requests it to do so.
• De-allocates the memory when a process no longer needs it or has been terminated.
2. Processor Management
In multiprogramming environment, the OS decides which process gets the processor when and for how much
time. This function is called process scheduling. An Operating System does the following activities for
processor management −
• Keeps tracks of processor and status of process. The program responsible for this task is known
as traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates processor when a process is no longer required.
3. Device Management
An Operating System manages device communication via their respective drivers. It does the following
activities for device management −
• Keeps tracks of all devices. Program responsible for this task is known as the I/O controller.
• Decides which process gets the device when and for how much time.
• Allocates the device in the efficient way.
• De-allocates devices.
4. File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. The OS handles all file-related activities such as the protection, retrieval, naming, sharing, organization, and storage of files.
An Operating System does the following activities for file management −
• Keeps track of information, location, uses, status etc. The collective facilities are often known as file
system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.
5. Network Management:
A distributed system is a group of processors that do not share memory, hardware devices, or a clock. The processes exchange information with one another through the network.
6. Security − By means of passwords and similar other techniques, it prevents unauthorized access to programs and data. This module protects the data and information of a computer system against malware threats and unauthorized access.
7. Control over system performance − Recording delays between request for a service and response from
the system.
8. Job accounting − Keeping track of time and resources used by various jobs and users.
9. Error detecting aids − Production of dumps, traces, error messages, and other debugging and error
detecting aids.
10. Coordination between other software and users − Coordination and assignment of compilers,
interpreters, assemblers and other software to the various users of the computer systems.
11. Secondary Storage Management: The levels of storage in the system include cache storage, primary storage, and secondary storage. Instructions are stored in the primary storage or cache so that a running program can reference them.
Types of Operating Systems
1. Batch Operating System
The purpose of this operating system was mainly to transfer control from one job to another as soon as the job was completed. It contained a small set of programs called the resident monitor that always resided in one part of the main memory. The remaining part was used for servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates CPU idle time between two jobs.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation.
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very high, then the
other four jobs will never be executed, or they will have to wait for a very long time. Hence the other processes
get starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires the input of two
numbers from the console, then it will never get it in the batch processing scenario since the user is not present
at the time of execution.
2. Multiprogramming Operating System
Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each process needs
two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process does its Input/Output, the CPU can start the execution of
other processes. Therefore, multiprogramming improves the efficiency of the system.
Advantages of Multiprogramming OS
o Throughput of the system is increased, as the CPU always has one program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems resources are used
efficiently, but they do not provide any user interaction with the computer system.
3. Multiprocessing Operating System
In Multiprocessing, parallel computing is achieved. More than one processor present in the system can execute
more than one process simultaneously, which will increase the throughput of the system.
Advantages of Multiprocessing operating system
o Increased reliability: Due to the multiprocessing system, processing tasks can be distributed among
several processors. This increases reliability as if one processor fails, the task can be given to another
processor for completion.
o Increased throughput: As the number of processors increases, more work can be done in less time.
Disadvantages of Multiprocessing operating System
o Multiprocessing operating system is more complex and sophisticated as it takes care of multiple CPUs
simultaneously.
4. Network Operating System
An operating system which includes software and associated protocols to communicate with other computers via a network conveniently and cost-effectively is called a Network Operating System.
Advantages of Network Operating System
o In this type of operating system, network traffic reduces due to the division between clients and the
server.
o This type of system is less expensive to set up and maintain.
Disadvantages of Network Operating System
o In this type of operating system, the failure of any node in a system affects the whole system.
o Security and performance are important issues. So trained network administrators are required for
network administration.
5. Real-Time Operating System
Applications of real-time systems exist in areas such as military systems: if a missile is to be launched, it must be launched with a certain precision.
6. Time-Sharing Operating System
A time-sharing operating system allows many users to be served simultaneously, so sophisticated CPU scheduling schemes and Input/Output management are required. Time-sharing operating systems are very difficult and expensive to build.
Advantages of Time Sharing Operating System
o The time-sharing operating system provides effective utilization and sharing of resources.
o This system reduces CPU idle time and response time.
Disadvantages of Time Sharing Operating System
o The data transmission overhead is very high in comparison to other methods.
o Security and integrity of user programs loaded in memory and data need to be maintained as many users
access the system at the same time.
System Calls in Operating System (OS)
A system call is a way for a user program to interface with the operating system. The program requests several services, and the OS responds by invoking a series of system calls to satisfy the request. A system call can be made in assembly language or from a high-level language like C or Pascal. When a high-level language is used, system calls appear as predefined functions that the program may invoke directly.
A system call is a method for a computer program to request a service from the kernel of the operating system on
which it is running. A system call is a method of interacting with the operating system via programs. A system
call is a request from computer software to an operating system's kernel.
The Application Program Interface (API) connects the operating system's functions to user programs. It acts
as a link between the operating system and a process, allowing user-level programs to request operating system
services. The kernel system can only be accessed using system calls. System calls are required for any programs
that use resources.
How are system calls made?
When a computer program needs to access the operating system's kernel, it makes a system call. The system
call uses an API to expose the operating system's services to user programs. It is the only method to access the
kernel system. All programs or processes that require resources for execution must use system calls, as they
serve as an interface between the operating system and user programs.
Below are some examples of how a system call varies from a user function.
1. A system call function may create and use kernel processes to execute the asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system call with kernel-mode privilege
executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that are not present in the kernel
protection domain.
4. The code and data for system calls are stored in global kernel memory.
Why do you need system calls in Operating System?
There are various situations in which system calls are required in the operating system. Some of these situations are as follows:
1. A system call is required when a file system wants to create or delete a file.
2. Network connections require system calls for sending and receiving data packets.
3. If you want to read or write a file, you need system calls.
4. If you want to access hardware devices, such as a printer or a scanner, you need a system call.
5. System calls are used to create and manage new processes.
How System Calls Work
Applications run in an area of memory known as user space. A system call connects to the operating system's kernel, which executes in kernel space. When an application makes a system call, it must first obtain permission from the kernel. It achieves this using an interrupt request, which pauses the current process and transfers control to the kernel.
If the request is permitted, the kernel performs the requested action, such as creating or deleting a file. When the operation is finished, the kernel moves the results from kernel space to user space in memory and returns them to the application, which then resumes execution.
A simple system call, like retrieving the system date and time, may take only a few nanoseconds to provide its result.
A more complicated system call, such as connecting to a network device, may take a few seconds. Most
operating systems launch a distinct kernel thread for each system call to avoid bottlenecks. Modern operating
systems are multi-threaded, which means they can handle various system calls at the same time.
Types of System Calls
There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Process Control
Process control system calls are used to direct processes. Some examples include creating a process, loading and executing a program, and ending, aborting, or terminating a process.
File Management
File management system calls are used to handle files. Some examples include creating, deleting, opening, closing, reading, and writing files.
Device Management
Device management system calls are used to deal with devices. Some examples include reading from a device, writing to a device, getting device attributes, and releasing a device.
Information Maintenance
Information maintenance system calls are used to maintain information. Some examples include getting or setting system data and getting or setting the time or date.
Communication
Communication system calls are used for interprocess communication. Some examples include creating and deleting communication connections and sending and receiving messages.
open()
The open() system call allows you to access a file on a file system. It allocates resources to the file and provides a handle that the process may refer to. A file may be opened by many processes at once or by a single process only; it all depends on the file system and its structure.
read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file descriptor returned by open() identifies the file; the file must be opened with open() before it can be read.
wait()
In some systems, a process may have to wait for another process to complete its execution before proceeding.
When a parent process makes a child process, the parent process execution is suspended until the child process
is finished. The wait() system call is used to suspend the parent process. Once the child process has completed
its execution, control is returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one way for a program to
generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes generate clones of themselves using the fork() system call. It is one of the most common ways to create processes in operating systems. When a parent process spawns a child process, the parent typically suspends itself (for example, by calling wait()) until the child process completes; once the child process has completed its execution, control is returned to the parent process.
close()
It is used to end file system access. When this system call is invoked, it signifies that the program no longer
requires the file, and the buffers are flushed, the file information is altered, and the file resources are de-allocated
as a result.
exec()
When an executable file replaces an earlier executable file in an already executing process, this system function
is invoked. As a new process is not built, the old process identifier stays, but the new program replaces the old one's code, data, stack, and heap.
exit()
The exit() is a system call that is used to end program execution. This call indicates that the thread execution is
complete, which is especially useful in multi-threaded environments. The operating system reclaims resources
spent by the process following the use of the exit() system function.
Process Management in OS
A program does nothing unless its instructions are executed by a CPU. A program in execution is called a process. In order to accomplish its task, a process needs computer resources.
There may exist more than one process in the system which may require the same resource at the same time.
Therefore, the operating system has to manage all the processes and the resources in a convenient and efficient
way.
Some resources may need to be used by only one process at a time to maintain consistency; otherwise the system can become inconsistent and deadlock may occur.
The operating system is responsible for the following activities in connection with Process Management
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.
Attributes of a process
The attributes of a process are used by the Operating System to create the process control block (PCB) for each process. This is also called the context of the process. The attributes stored in the PCB are described below.
1. Process ID
When a process is created, a unique id is assigned to the process which is used for unique identification of the
process in the system.
2. Program counter
A program counter stores the address of the instruction at which the process was suspended, i.e., the next instruction to be executed. The CPU uses this address when the execution of this process is resumed.
3. Process State
The process, from its creation to its completion, goes through various states: new, ready, running, and waiting. We will discuss them later in detail.
4. Priority
Every process has its own priority. The process with the highest priority among the processes gets the CPU
first. This is also stored on the process control block.
5. General Purpose Registers
Every process has its own set of registers which are used to hold the data which is generated during the execution
of the process.
6. List of open files
During execution, every process uses some files which need to be present in the main memory. The OS maintains a list of open files in the PCB.
7. List of open devices
The OS also maintains a list of all open devices which are used during the execution of the process.
Process States
State Diagram
The process, from its creation to completion, passes through various states. The minimum number of states is
five.
The names of the states are not standardized although the process may be in one of the following states during
execution.
1. New
A program which is about to be picked up by the OS and brought into the main memory is called a new process.
2. Ready
Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks new processes from the secondary memory and puts them in the main memory.
The processes which are ready for the execution and reside in the main memory are called ready state processes.
There can be many processes present in the ready state.
3. Running
One of the processes from the ready state will be chosen by the OS depending upon the scheduling algorithm.
Hence, if we have only one CPU in our system, the number of running processes at any particular time will always be one. If we have n processors in the system, then we can have n processes running simultaneously.
4. Block or wait
From the Running state, a process can make the transition to the block or wait state depending upon the
scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination
When a process finishes its execution, it comes to the termination state. All the context of the process (its Process Control Block) is deleted, and the process is terminated by the Operating System.
6. Suspend ready
A process in the ready state which is moved to secondary memory from the main memory due to a lack of resources (mainly primary memory) is said to be in the suspend ready state.
If the main memory is full and a higher-priority process arrives for execution, then the OS has to make room for it in the main memory by moving a lower-priority process out into the secondary memory. The suspend ready processes remain in the secondary memory until the main memory becomes available.
7. Suspend wait
Instead of removing a process from the ready queue, it is better to remove a blocked process which is waiting for some resource in the main memory. Since it is already waiting for some resource to become available, it is better if it waits in the secondary memory and makes room for the higher-priority process. These processes complete their execution once the main memory becomes available and their wait is finished.
Schedulers
Process Scheduling in OS (Operating System)
The operating system uses various schedulers for process scheduling, as described below.
1. Long term scheduler
Long term scheduler is also known as job scheduler. It chooses the processes from the pool (secondary
memory) and keeps them in the ready queue maintained in the primary memory.
Long Term scheduler mainly controls the degree of Multiprogramming. The purpose of long term scheduler
is to choose a perfect mix of IO bound and CPU bound processes among the jobs present in the pool.
If the job scheduler chooses too many I/O-bound processes, then all of the jobs may reside in the blocked state most of the time and the CPU will remain idle. This will reduce the degree of multiprogramming. Therefore, the job of the long term scheduler is very critical and may affect the system for a very long time.
2. Short term scheduler
The short term scheduler, also known as the CPU scheduler, selects one of the processes from the ready queue and assigns the CPU to it. It runs very frequently and therefore must be fast.
3. Medium term scheduler
The medium term scheduler takes care of swapping: it moves suspended processes out of the main memory to the secondary memory and brings them back when resources permit, thereby controlling the degree of multiprogramming.
Process Queues
The Operating system manages various types of queues for each of the process states. The PCB related to the
process is also stored in the queue of the same state. If the Process is moved from one state to another state then
its PCB is also unlinked from the corresponding queue and added to the other state queue in which the transition
is made.
1. Arrival Time
The time at which a process enters the ready queue is called its arrival time.
2. Burst Time
The total amount of CPU time required to execute the whole process is called the burst time. This does not include the waiting time. It is difficult to know a process's execution time before actually executing it; hence scheduling algorithms based purely on the burst time cannot easily be implemented in practice.
3. Completion Time
The Time at which the process enters into the completion state or the time at which the process completes its
execution, is called completion time.
4. Turnaround time
The total amount of time spent by the process from its arrival to its completion, is called Turnaround time.
5. Waiting Time
The Total amount of time for which the process waits for the CPU to be assigned is called waiting time.
6. Response Time
The difference between the arrival time and the time at which the process first gets the CPU is called Response
Time.
CPU Scheduling
In uniprogramming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle. This is an overhead, since it wastes time and causes the problem of starvation. However, in multiprogramming systems, the CPU doesn't remain idle during the waiting time of a process; it starts executing other processes. The operating system has to decide which process the CPU will be given to.
In multiprogramming systems, the operating system schedules the processes on the CPU to achieve maximum utilization, and this procedure is called CPU scheduling. The operating system uses various scheduling algorithms to schedule the processes.
It is the task of the short term scheduler to schedule the CPU for the processes present in the ready queue. Whenever the running process requests some I/O operation, the short term scheduler saves the current context of the process (in its PCB) and changes its state from running to waiting. While the process is in the waiting state, the short term scheduler picks another process from the ready queue and assigns the CPU to it. This procedure is called context switching.
Why do we need Scheduling?
In multiprogramming, if the long term scheduler picks too many I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to waiting, then there may always be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs so as to get the optimal utilization of the CPU and to avoid the possibility of deadlock.
1. First Come First Serve (FCFS)
It is the simplest algorithm to implement. The process with the minimal arrival time gets the CPU first: the earlier the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
2. Round Robin
In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes get executed in a cyclic way. Each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
3. Shortest Job First
The job with the shortest burst time gets the CPU first: the lesser the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
4. Shortest remaining time first
It is the preemptive form of SJF. In this algorithm, the OS schedules the Job according to the remaining time
of the execution.
5. Priority based scheduling
In this algorithm, a priority is assigned to each process. The higher the priority, the sooner the process gets the CPU. If the priority of two processes is the same, then they are scheduled according to their arrival time.
6. Highest Response Ratio Next
In this scheduling algorithm, the process with the highest response ratio is scheduled next. This reduces starvation in the system.
A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling
algorithms. There are six popular process scheduling algorithms−
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Priority Scheduling
• Shortest Remaining Time
• Round Robin(RR) Scheduling
• Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.
Waiting time of each process under FCFS (waiting time = service time - arrival time):
Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        8 - 2 = 6
P3        16 - 3 = 13
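The FCFS waiting times above can be reproduced with a short simulation. The process data (arrival and burst times) come from the example; the function name is ours:

```python
def fcfs_waiting_times(procs):
    """procs: list of (name, arrival, burst), already sorted by arrival.
    In FCFS the CPU is handed out strictly in arrival order."""
    t = 0
    waiting = {}
    for name, arrival, burst in procs:
        t = max(t, arrival)          # CPU may be idle until arrival
        waiting[name] = t - arrival  # waiting = service time - arrival
        t += burst                   # process runs to completion
    return waiting

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(fcfs_waiting_times(procs))  # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
```

The average waiting time here is (0 + 4 + 6 + 13) / 4 = 5.75.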
Shortest Job Next gives each process the following service (start) times:
Process   Arrival Time   Execution Time   Service Time
P0        0              5                0
P1        1              3                5
P2        2              8                14
P3        3              6                8
Waiting time of each process (waiting time = service time - arrival time):
Process   Waiting Time
P0        0 - 0 = 0
P1        5 - 1 = 4
P2        14 - 2 = 12
P3        8 - 3 = 5
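A minimal non-preemptive Shortest Job Next simulation (same hypothetical processes as above) reproduces these service times; at every scheduling point it picks the ready process with the smallest burst:

```python
def sjn_service_times(procs):
    """Non-preemptive Shortest Job Next: whenever the CPU is free,
    pick the arrived process with the smallest burst time."""
    procs = list(procs)
    t = 0
    service = {}
    while procs:
        ready = [p for p in procs if p[1] <= t]   # arrived by time t
        if not ready:                             # CPU idle: jump ahead
            t = min(p[1] for p in procs)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        service[name] = t                         # service (start) time
        t += burst
        procs.remove((name, arrival, burst))
    return service

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(sjn_service_times(procs))
```

At t = 5 the ready queue holds P1, P2, P3, and P1's burst of 3 is smallest, so P1 runs next, giving the service times in the table.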
Priority scheduling gives each process the following service times:
Process   Arrival Time   Execution Time   Priority   Service Time
P0        0              5                1          0
P1        1              3                2          11
P2        2              8                1          14
P3        3              6                3          5
Waiting time of each process (waiting time = service time - arrival time):
Process   Waiting Time
P0        0 - 0 = 0
P1        11 - 1 = 10
P2        14 - 2 = 12
P3        5 - 3 = 2
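The service times in the priority table can be checked with a sketch like the one below. Note that in this example a larger priority number means higher priority (that is what makes P3 run right after P0):

```python
def priority_service_times(procs):
    """Non-preemptive priority scheduling. Here a larger priority
    number means higher priority, matching the table above."""
    procs = list(procs)
    t = 0
    service = {}
    while procs:
        ready = [p for p in procs if p[1] <= t]   # arrived by time t
        if not ready:                             # CPU idle: jump ahead
            t = min(p[1] for p in procs)
            continue
        chosen = max(ready, key=lambda p: p[3])   # highest priority wins
        name, arrival, burst, _ = chosen
        service[name] = t
        t += burst
        procs.remove(chosen)
    return service

# (name, arrival, burst, priority) taken from the table
procs = [("P0", 0, 5, 1), ("P1", 1, 3, 2), ("P2", 2, 8, 1), ("P3", 3, 6, 3)]
print(priority_service_times(procs))
```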
Under Shortest Remaining Time scheduling, the waiting time of each process is:
Process   Waiting Time
P0        (0 - 0) + (12 - 3) = 9
P1        (3 - 1) = 2
P3        (9 - 3) + (17 - 12) = 11
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue. The
Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the
algorithm assigned to the queue.
2. Progress: Progress means that if a process does not need to execute in the critical section, it should
not stop other processes from getting into the critical section.
Secondary
1. Bounded Waiting: We should be able to bound the waiting time of every process to get into the
critical section. No process should wait endlessly to enter the critical section.
2. Architectural Neutrality: Our mechanism must be architecture neutral. It means that if our solution
works fine on one architecture, it should also run on the other ones as well.
There may be a state where one or more processes try to enter the critical section. If multiple processes enter
the critical section at the same time, the second process may access a variable that is already being accessed
by the first process.
Explanation
Suppose there is a shared variable. Let us define that shared variable:
int x = 10;
Here, x is the shared variable.
Process 1
// Process 1
int s = 10;
int u = 20;
x = s + u;
Process 2
// Process 2
int s = 10;
int u = 20;
x = s - u;
If the processes access the shared variable x one after the other, there is no problem.
If Process 1 alone is executed, the value of x becomes x = 30;
the shared variable x changes from 10 to 30.
If Process 2 alone is then executed, the value of x becomes x = -10;
the shared variable x changes from 30 to -10.
If both processes run at the same time, the final value of x depends on which process writes last, i.e. -10 or
30. This state of the variable x is called data inconsistency. Such problems can be prevented with hardware
locks, or with a synchronization mechanism called semaphores.
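To make the problem concrete, a small Python sketch can replay the two writes from the example in either order; which value of x survives depends purely on which process writes last (the function name and the order lists are ours):

```python
def run(order):
    """Replay the two writes from the text in the given order.
    The final value of the shared variable x depends on which
    process's write lands last."""
    x = 10          # shared variable
    s, u = 10, 20
    for proc in order:
        if proc == 1:
            x = s + u   # Process 1's write
        else:
            x = s - u   # Process 2's write
    return x

print(run([1, 2]))  # Process 2 writes last -> x = -10
print(run([2, 1]))  # Process 1 writes last -> x = 30
```

With real concurrent processes the order is not under the programmer's control, which is exactly why the outcome is inconsistent without synchronization.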
Semaphores
A semaphore is just an integer. The semaphore value cannot be negative: the least value is zero (0), while
the maximum value can be anything. Semaphores usually have two operations, and these two operations
decide the value of the semaphore.
The two Semaphore Operations are:
1. Wait ( )
2. Signal ( )
Wait Semaphore Operation
The Wait operation decides whether a process may enter the critical section or must wait. The wait
operation has many different names:
1. Sleep Operation
2. Down Operation
3. Decrease Operation
4. P Function (the most common alias for the wait operation)
The Wait operation works on the basis of the semaphore (or mutex) value.
If the semaphore value is greater than zero (positive), the process can enter the critical section.
If the semaphore value is equal to zero, the process has to wait for another process to exit the critical
section.
This operation is only involved until the process enters the critical section; once the process is inside, the
P function or wait operation has no further job to do.
When a process enters the critical section, the value of the semaphore is decremented.
Basic Algorithm of P Function or Wait Operation
P (Semaphore value)
{
Allow the process to enter if the value of the semaphore is greater than zero (positive).
Do not allow the process to enter if the value of the semaphore is zero.
Decrement the semaphore value when the process enters the critical section.
}
Signal Semaphore Operation
The Signal semaphore operation is used to update the value of the semaphore. The semaphore value is
updated when a process leaves the critical section, so that new processes can enter it.
The Signal Operation is also known as:
1. Wake up Operation
2. Up Operation
3. Increase Operation
4. V Function (most important alias name for signal operation)
We know that the semaphore value is decreased by one in the wait operation when a process enters the critical
section. To counterbalance that decrement, the signal operation increments the semaphore value, which
allows waiting processes to enter the critical section.
The most important point is that this signal operation or V function is executed only when the process comes
out of the critical section. The value of the semaphore cannot be incremented before the process exits the
critical section.
Basic Algorithm of V Function or Signal Operation
V (Semaphore value)
{
If the process goes out of the critical section, then add 1 to the semaphore value.
Else keep calm until the process exits.
}
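As an illustrative sketch (not a kernel implementation; real Python code would simply use threading.Semaphore), the P and V operations described above can be modelled with a condition variable:

```python
import threading

class Semaphore:
    """Sketch of the P (wait) and V (signal) operations from the notes,
    built on a condition variable. The value never drops below zero."""
    def __init__(self, value=1):
        self.value = value
        self.cond = threading.Condition()

    def P(self):                          # wait / down / sleep
        with self.cond:
            while self.value == 0:        # block while the value is zero
                self.cond.wait()
            self.value -= 1               # decrement on entry

    def V(self):                          # signal / up / wake-up
        with self.cond:
            self.value += 1               # increment on exit
            self.cond.notify()            # wake one waiting process

s = Semaphore(1)
s.P()            # enter the critical section (value becomes 0)
s.V()            # leave the critical section (value back to 1)
print(s.value)   # 1
```

The while-loop in P and the notify in V are what prevent a process from entering while the value is zero.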
Types of Semaphores
There are two types of Semaphores:
1. Binary Semaphore
In the binary semaphore concept there are only two semaphore values: 1 and 0.
If the value of the binary semaphore is 1, a process can enter the critical section. If the value of the binary
semaphore is 0, the process cannot enter the critical section.
2. Counting Semaphore
In the counting semaphore concept there are two sets of semaphore values: values greater than or equal to
one, and the value zero.
If the value of the counting semaphore is greater than or equal to 1, a process can enter the critical section. If
the value of the counting semaphore is 0, the process cannot enter the critical section.
Advantages of a Semaphore
o Semaphores are machine independent, since their implementation and code live in the machine-
independent code area of the microkernel.
o They strictly enforce mutual exclusion and let processes enter the critical section one at a time (in the
case of binary semaphores).
o With semaphores, no processor time is wasted on busy waiting to verify that a condition is met before
allowing a process into the critical section.
o Semaphores allow very good management of resources.
o They prevent multiple processes from entering the critical section at once. Since mutual exclusion is
made possible this way, they are significantly more efficient than some other synchronization approaches.
Disadvantages of a Semaphore
o Due to the way semaphores are employed, high-priority processes may reach the critical section
before low-priority processes.
o Because semaphores are a little complex, the wait and signal operations must be designed in a way
that avoids deadlocks.
o Programming with semaphores is very challenging, and there is a danger that mutual exclusion will
not be achieved.
o The wait ( ) and signal ( ) operations must be carried out in the appropriate order to prevent deadlocks.
Inter Process Communication
In general, inter-process communication (IPC) is a mechanism provided by the operating system (or
OS). The main aim or goal of this mechanism is to provide communication between several processes. In
short, it allows one process to let another process know that some event has occurred.
"Inter-process communication is used for exchanging useful information between numerous threads in one or
more processes (or programs)."
To understand inter process communication, you can consider the following given diagram that illustrates the
importance of inter-process communication:
1. Pipe
2. Shared Memory
3. Message Queue
4. Direct Communication
5. Indirect communication
6. Message Passing
7. FIFO
Pipe:-
A pipe is a type of data channel that is unidirectional in nature: data in this type of channel can move in
only a single direction at a time. Still, one can use two channels of this type so that data can be sent and
received between two processes. Typically, a pipe uses the standard methods for input and output. Pipes
are used in all types of POSIX systems and in different versions of the Windows operating system as well.
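A minimal sketch of a unidirectional pipe, using Python's os.pipe. For brevity both ends are used in one process here; in practice the two ends would usually be split between a parent and a child process:

```python
import os

# os.pipe returns two file descriptors: (read end, write end).
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello through the pipe")
os.close(write_fd)              # close write end so the reader sees EOF

data = os.read(read_fd, 1024)   # data flows in one direction only
os.close(read_fd)
print(data.decode())
```

Two-way communication would need a second pipe, with the ends reversed, exactly as the text describes.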
Shared Memory:-
Shared memory is memory that can be used or accessed by multiple processes simultaneously. It is
primarily used so that processes can communicate with each other. Shared memory is supported by
almost all POSIX and Windows operating systems.
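A rough sketch using Python's multiprocessing.shared_memory module. For brevity the "creator" and the "reader" both run in one script here; in practice they would be separate processes attaching to the block by its name:

```python
from multiprocessing import shared_memory

# One side creates a shared-memory block...
creator = shared_memory.SharedMemory(create=True, size=32)
creator.buf[:5] = b"hello"                 # writer side

# ...and the other side attaches to the same block by name.
reader = shared_memory.SharedMemory(name=creator.name)
message = bytes(reader.buf[:5])            # reader sees the same bytes

reader.close()
creator.close()
creator.unlink()                           # free the block
print(message.decode())
```

Because both sides map the same physical memory, no copy of the data is made, which is what makes shared memory the fastest IPC mechanism.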
Message Queue:-
In general, several processes are allowed to read and write messages to the message queue. The
messages are stored in the queue until their recipients retrieve them. In short, we can say that the message
queue is very helpful for inter-process communication and is used by all operating systems.
To understand the concept of Message queue and Shared memory in more detail, let's take a look at its diagram
given below:
Message Passing:-
Message passing is a mechanism that allows processes to synchronize and communicate with each other.
Using message passing, processes can communicate with each other without resorting to shared variables.
Usually, the inter-process communication mechanism provides two operations, as follows:
o send (message)
o receive (message)
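As a sketch of the send/receive pair, the following uses a thread and a queue.Queue standing in for two separate processes and the OS message queue (the names are illustrative):

```python
import queue
import threading

mailbox = queue.Queue()   # stands in for the OS-provided message channel

def sender():
    mailbox.put("event occurred")   # send(message)

t = threading.Thread(target=sender)
t.start()
msg = mailbox.get()                 # receive(message): blocks until a message arrives
t.join()
print(msg)
```

The blocking get() is what provides the synchronization: the receiver cannot proceed until the sender has sent.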
Direct Communication:-
In this type of communication process, usually, a link is created or established between two communicating
processes. However, in every pair of communicating processes, only one link can exist.
Indirect Communication
Indirect communication can only be established when processes share a common mailbox. Between each
pair of such processes, several communication links may exist, and these links can be unidirectional or bi-
directional.
FIFO:-
A FIFO is used for general communication between two unrelated processes. It can also be considered
full-duplex, meaning that one process can communicate with the other process and vice versa.
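A POSIX-only sketch of a FIFO (named pipe) using os.mkfifo. The filesystem name is what lets unrelated processes find each other; here a writer thread stands in for the second process:

```python
import os
import tempfile
import threading

# Create a FIFO at a temporary filesystem path.
fifo_path = os.path.join(tempfile.mkdtemp(), "demo_fifo")
os.mkfifo(fifo_path)

def writer():
    with open(fifo_path, "w") as f:   # blocks until a reader opens the FIFO
        f.write("hello via FIFO")

t = threading.Thread(target=writer)
t.start()
with open(fifo_path) as f:            # blocks until a writer opens the FIFO
    received = f.read()
t.join()
os.remove(fifo_path)
print(received)
```

Unlike an anonymous pipe, the FIFO's path survives in the filesystem, so any process that knows the name can open it.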
Some other different approaches
o Socket:- A socket acts as an endpoint for sending or receiving data in a network. It works both for data
sent between processes on the same computer and for data sent between different computers on the same
network. Hence, it is used by several types of operating systems.
o File:- A file is a data record or document stored on disk that can be acquired on demand from the
file server. Importantly, several processes can access that file as required or needed.
o Signal:- As the name implies, signals are used in inter-process communication in a minimal
way. Typically, they are system messages sent from one process to another. Therefore, they are not
used for sending data but for sending remote commands between processes.
Why we need interprocess communication?
There are numerous reasons to use inter-process communication for sharing data. Some of the most
important reasons are given below:
o It helps to speed up modularity
o Computational speedup
o Privilege separation
o Convenience
o It helps processes to communicate with each other and synchronize their actions as well.