unit1_OS

An operating system (OS) serves as an intermediary between users and computer hardware, facilitating convenience, efficiency, and adaptability. It manages memory, processors, devices, files, security, performance, and error detection while evolving through four generations from serial processing to personal computing. The OS structure varies from simple to layered and microkernel designs, with system calls providing essential interfaces for process control, file management, and communication.


UNIT I

Define OS:

An operating system is a program that acts as an intermediary between the user of a computer and the computer hardware, and controls the execution of all kinds of programs.

Objectives of OS:
1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to be used in an efficient
manner.
3. Ability to evolve: An OS should be constructed in such a way as to permit the
effective development, testing, and introduction of new system functions without
interfering with service.
Functions of an operating System:
Memory Management: Memory management refers to the management of primary memory (main memory). An operating system does the following activities for memory management: it keeps track of primary memory, i.e., which parts are in use and by whom, and which parts are free. In multiprogramming, the OS decides which process will get memory, when, and how much. The OS allocates memory when a process requests it and de-allocates the memory when the process no longer needs it or has terminated.
Processor Management: In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An operating system does the following activities for processor management: it keeps track of the processor and the status of each process, allocates the processor (CPU) to a process, and de-allocates the processor when the process no longer requires it.
Device Management: An operating system manages device communication via the devices' respective drivers. It does the following activities for device management: it keeps track of all devices (the program responsible for this task is known as the I/O controller), decides which process gets a device, when, and for how long, and allocates and de-allocates devices in the most efficient way.
File Management: A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. An operating system does the following activities for file management: it keeps track of information, location, uses, status, etc. (these collective facilities are often known as the file system), decides who gets the resources, and allocates and de-allocates resources as needed.
Security: The OS prevents unauthorized access to programs and data. For shared or public systems, the OS controls access to the system as a whole and to specific system resources.
Control over system performance: The OS collects usage statistics for various resources and monitors performance parameters such as response time, recording delays between a request for a service and the response from the system.
Job accounting: The OS keeps track of the time and resources used by various jobs and users. On any system, this information is useful in anticipating the need for future enhancements, in tuning the system to improve performance, and for job accounting purposes.
Error detection & response: A variety of errors can occur while a computer system is running, including internal and external hardware errors (such as a memory error or a device failure or malfunction) and various software errors. In each case, the OS must provide a response that clears the error condition with the least impact on running applications. The response may range from ending the program that caused the error, to retrying the operation, to simply reporting the error to the application; the OS may also produce dumps, traces, error messages, and other debugging and error-detection aids.
Booting the computer: Booting is the process of starting or restarting the computer. If the computer is switched off completely and then turned on, it is cold booting; if the computer is restarted, it is warm booting. Booting is performed by the OS.
Coordination between other software and users: An OS enables coordination of
hardware components, coordination and assignment of compilers, interpreters,
assemblers and other software to the various users of the computer systems.

Operating System Evolution


The evolution of operating systems is divided into four generations, which are explained as follows −
First Generation (Serial Processing)
This was the beginning of the development of electronic computing systems, built as substitutes for mechanical computing systems, whose drawbacks included the limited speed of human calculation and the ease with which humans make mistakes. In this generation there was no operating system, so instructions had to be given to the computer system directly.
Example − Type of operating system and devices used: plug boards.
Second Generation (Batch Processing)
Batch processing was introduced in the second generation: jobs or tasks that could be grouped into a series were executed sequentially. In this generation the computer system was still not equipped with an operating system, but several operating system functions existed, such as FMS and IBSYS.
Example − Type of operating system and devices used: batch systems.
Third Generation (Multiprogramming, Time Sharing)
In the third generation the operating system was developed to serve multiple users at once. Interactive users could communicate with the computer through an online terminal, so the operating system became multi-user and multiprogramming.
Example − Type of operating system used: multiprogramming.
Fourth Generation (Personal Computers)
In this generation the operating system is used in computer networks, where users are aware of the existence of computers connected to one another. Users are also served by a Graphical User Interface (GUI), an extremely comfortable graphical computer interface, and the era of distributed computing has begun. With the arrival of new wearable devices such as smart watches, smart glasses, and VR gear, the demand for unconventional operating systems is also rising.
Example − Type of operating system and devices used: personal computers.
Operating-System Structure

Simple Structure
 Operating systems such as MS-DOS and the original UNIX did not have well-
defined structures.
 There was no CPU Execution Mode (user and kernel), and so errors in
applications could cause the whole system to crash.

Monolithic

 Functionality of the OS is invoked with simple function calls within the kernel,
which is one large program.
 Device drivers are loaded into the running kernel and become part of the kernel.
Layered Approach

This approach breaks up the operating system into different layers.

 This allows implementers to change the inner workings and increases modularity.
 As long as the external interfaces of the routines don't change, developers have more freedom to change their inner workings.
 With the layered approach, the bottom layer is the hardware, while the highest
layer is the user interface.
o The main advantage is simplicity of construction and debugging.
o The main difficulty is defining the various layers.
o The main disadvantage is that the OS tends to be less efficient than
other implementations.

Microkernels
This structures the operating system by removing all nonessential portions of the kernel
and implementing them as system and user level programs.

 Generally they provide minimal process and memory management, and a communications facility.
 Communication between components of the OS is provided by message passing.

The benefits of the microkernel are as follows:

 Extending the operating system becomes much easier.


 Any changes to the kernel tend to be fewer, since the kernel is smaller.
 The microkernel also provides more security and reliability.

The main disadvantage is poor performance due to increased system overhead from message passing.

System Calls
The system call provides an interface to the operating system services.

The interface between a process and an operating system is provided by system calls. In
general, system calls are available as assembly language instructions. They are also included in
the manuals used by the assembly level programmers. System calls are usually made when a
process in user mode requires access to a resource. Then it requests the kernel to provide the
resource via a system call.

In general, system calls are required in the following situations −

 Creating or deleting files in a file system; reading from and writing to files also require system calls.
 Creating and managing new processes.
 Making network connections, including sending and receiving packets.
 Accessing hardware devices such as a printer or scanner.

Types of System Calls

1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Process Control
Process control is the system call that is used to direct the processes. Some
process control examples include creating, load, abort, end, execute,
process, terminate the process, etc.

File Management
File management is a system call that is used to handle the files. Some file
management examples include creating files, delete files, open, close, read,
write, etc.

Device Management
Device management is a system call that is used to deal with devices. Some
examples of device management include read, device, write, get device
attributes, release device, etc.

Information Maintenance
Information maintenance is a system call that is used to maintain
information. There are some examples of information maintenance, including
getting system data, set time or date, get time or date, set system data, etc.

Communication
Communication is a system call that is used for communication. There are
some examples of communication, including create, delete communication
connections, send, receive messages, etc.
System Programs
System programs provide an environment where programs can be developed and executed. In
the simplest sense, system programs also provide a bridge between the user interface and
system calls. In reality, they are much more complex. For example, a compiler is a complex
system program.

The system program serves as a part of the operating system. It traditionally lies
between the user interface and the system calls. The user view of the system is actually
defined by system programs and not system calls because that is what they interact
with and system programs are closer to the user interface.
In the operating system hierarchy, system programs (as well as application programs) form a bridge between the user interface and the system calls. So, from the user's point of view, the operating system observed is actually the system programs and not the system calls.

System Programs can be divided into these categories :

1. File Management –
A file is a collection of specific information stored in the memory of a computer system. File management is the process of manipulating files in the computer system; it includes creating, modifying, and deleting files.
 It helps to create new files in the computer system and place them at specific locations.
 It helps in easily and quickly locating files in the computer system.
 It makes the process of sharing files among different users easy and user-friendly.
 It helps to store files in separate folders known as directories.
 These directories help users to search for files quickly or to manage files according to their types or uses.
 It helps users to modify the data of files or to rename files in directories.

2. Status Information –
Some users ask for simple information such as the date, time, amount of available memory, or disk space; others want detailed performance, logging, and debugging information, which is more complex. All this information is formatted and displayed on output devices or printed. A terminal, another output device, a file, or a GUI window is used for showing the output of programs.

3. File Modification –
These programs are used for modifying the contents of files. For files stored on disks or other storage devices, we use different types of editors. Special commands are used for searching file contents or performing transformations on files.

4. Programming-Language support –
Compilers, assemblers, debuggers, and interpreters for common programming languages are provided to users, giving full support for running programs; all languages of importance are already provided.

5. Program Loading and Execution –
When a program is ready after assembling and compilation, it must be loaded into memory for execution. A loader is the part of an operating system that is responsible for loading programs and libraries; it is one of the essential stages in starting a program. Loaders, relocatable loaders, linkage editors, and overlay loaders are provided by the system.

6. Communications –
These programs provide virtual connections among processes, users, and computer systems. They allow users to send messages to another user's screen, send e-mail, browse web pages, log in remotely, and transfer files from one user to another.

Some examples of system programs in an OS are –
 Windows 10
 Mac OS X
 Ubuntu
 Linux
 Unix
 Android
 Anti-virus
 Disk formatting
 Computer language translators
Virtual Machines
Virtual Machine abstracts the hardware of our personal computer such as
CPU, disk drives, memory, NIC (Network Interface Card) etc, into many
different execution environments as per our requirements, hence giving us a
feel that each execution environment is a single computer. For example,
VirtualBox.
When we run different processes on an operating system, it creates an illusion
that each process is running on a different processor having its own virtual
memory, with the help of CPU scheduling and virtual-memory techniques.
There are additional features of a process that cannot be provided by the hardware alone, such as system calls and a file system. The virtual-machine approach does not provide these additional functionalities; it only provides an interface that is identical to the underlying bare hardware. Each process is provided with a virtual copy of the underlying computer system.
We can create a virtual machine for several reasons, all of which are
fundamentally related to the ability to share the same basic hardware yet can
also support different execution environments, i.e., different operating systems
simultaneously.
The main drawback of the virtual-machine approach involves disk systems. Suppose that the physical machine has only three disk drives but needs to support seven virtual machines. Obviously, it cannot allocate a disk drive to each virtual machine, because the virtual-machine software itself needs substantial disk space to provide virtual memory and spooling. The solution is to provide virtual disks.
Users are thus given their own virtual machines, on which they can run any of the operating systems or software packages available on the underlying machine. The virtual-machine software is concerned with multiprogramming multiple virtual machines onto a physical machine, but it does not need to consider any user-support software. This arrangement provides a useful way to divide the problem of designing a multi-user interactive system into two smaller pieces.
Advantages:
1. There are no protection problems because each virtual machine is
completely isolated from all other virtual machines.
2. A virtual machine can provide an instruction-set architecture that differs from that of the real computer.
3. Easy maintenance, availability and convenient recovery.
Disadvantages:
1. When multiple virtual machines are simultaneously running on a host
computer, one virtual machine can be affected by other running virtual
machines, depending on the workload.
2. Virtual machines are not as efficient as a real one when accessing the
hardware.

Process
A process is basically a program in execution. The execution of a process must
progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in
the program.
When a program is loaded into memory and becomes a process, it can be divided into four sections: stack, heap, text, and data.
Process Scheduling
The process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.
Process scheduling is an essential part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.

Process Scheduling Queues


The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a
separate queue for each of the process states and PCBs of all processes in the same
execution state are placed in the same queue. When the state of a process is changed,
its PCB is unlinked from its current queue and moved to its new state queue.
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
 Device queues − The processes which are blocked due to unavailability of an
I/O device constitute this queue.

Schedulers
Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to
decide which process to run. Schedulers are of three types −

 Long-Term Scheduler
 Short-Term Scheduler
 Medium-Term Scheduler

Long Term Scheduler


It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution, making them available for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming. If the degree of multiprogramming is stable, then the average rate of process creation must equal the average rate of processes departing the system.
On some systems the long-term scheduler may be absent or minimal; time-sharing operating systems have no long-term scheduler. The long-term scheduler acts when a process changes state from new to ready.

Short Term Scheduler


It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It carries out the change of a process from the ready state to the running state: the CPU scheduler selects one process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, decide which process to execute next. Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler


Medium-term scheduling is a part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
Operations on Processes
Process: A process is an activity of executing a program. Basically, it is a
program under execution. Every process needs certain resources to
complete its task.

Operation on a Process:
The execution of a process is a complex activity. It involves various
operations. Following are the operations that are performed while execution
of a process:

1. Creation: This is the initial step of process execution activity. Process creation means the construction of a new process for execution. It may be performed by the system, by a user, or by an old process itself. Several events can lead to process creation, such as the following:
 When we start the computer, the system creates several background processes.
 A user may request the creation of a new process.
 A process can itself create a new process while executing.
 A batch system takes up the initiation of a batch job.
2. Scheduling/Dispatching: The event or activity in which the state of the
process is changed from ready to running. It means the operating system puts
the process from ready state into the running state. Dispatching is done by
operating system when the resources are free or the process has higher priority
than the ongoing process. There are various other cases in which the process
in running state is preempted and process in ready state is dispatched by the
operating system.
3. Blocking: When a process invokes an input-output system call, the call blocks the process and the operating system puts it in blocked mode. Blocked mode is basically a mode in which the process waits for input-output. Hence, at the demand of the process itself, the operating system blocks the process and dispatches another process to the processor. In the blocking operation, the operating system puts the process in the 'waiting' state.
4. Preemption: When a timeout occurs, meaning the process has not finished within the allotted time interval and another process is ready to execute, the operating system preempts the process. This operation is only valid where CPU scheduling supports preemption; it typically happens in priority scheduling, where the arrival of a higher-priority process preempts the ongoing process. In the preemption operation, the operating system puts the process in the 'ready' state.
5. Termination: Process termination is the activity of ending a process. In other words, it is the release of the computer resources taken by the process for its execution. As with creation, several events may lead to process termination. Some of them are:
 The process completes its execution and indicates to the OS that it has finished.
 The operating system itself terminates the process due to service errors.
 A hardware problem terminates the process.
 One process is terminated by another process.

Cooperating processes
In a computer system there are many processes, which may be either independent processes or cooperating processes. A process is independent when it cannot affect, or be affected by, any other process running in the system; clearly, any process that does not share any data (temporary or persistent) with another process is independent. A cooperating process, on the other hand, is one that can affect, or be affected by, other processes running on the computer; a cooperating process shares data with other processes.

There are several reasons for providing an environment that allows process
cooperation:

 Information sharing
Many users may want the same piece of information at the same time (for instance, a shared file), so we try to provide an environment in which users are allowed concurrent access to such resources.
 Computation speedup
If we want a task to run faster, we break it into subtasks, each of which executes in parallel with the others. Note that this speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
 Modularity
We may want to construct the system in a modular fashion, dividing its functions into separate processes.
 Convenience
An individual user may have many tasks to perform at the same time, such as editing, printing, and compiling.

A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can:

1. Directly share a logical address space (i.e., code and data) - this may result in data inconsistency; it is implemented with threads.
2. Share data only through files/messages - so we will deal with various ways to ensure the orderly execution of cooperating processes so that data consistency is maintained.

A classic example is the producer-consumer problem: a producer produces an item and places it into a buffer, and a consumer consumes that item. For example, a print program produces characters that are consumed by the printer driver; a compiler may produce assembly code, which is consumed by an assembler; the assembler, in turn, may produce object modules, which are consumed by the loader.
Producer process:

while (true)
{
    /* produce an item */
    while (counter == buffer_size)
        ;   /* busy-wait: buffer full */
    buffer[in] = next_produced;
    in = (in + 1) % buffer_size;
    counter++;
}

Consumer process:

while (true)
{
    while (counter == 0)
        ;   /* busy-wait: buffer empty */
    next_consumed = buffer[out];
    out = (out + 1) % buffer_size;
    counter--;
}

Here,

 the in variable is used by the producer to identify the next empty slot in the buffer.
 the out variable is used by the consumer to identify the slot from which the next item is to be consumed.
 counter is used by both producer and consumer to track the number of filled slots in the buffer.

Shared Resources

1. buffer
2. counter

When the producer and consumer are executed concurrently without any control, inconsistency arises: the value of counter, which is updated by both the producer and the consumer, will be wrong. The producer and consumer processes share the following variables:

var n;
type item = .....;
var buffer : array [0..n-1] of item;
    in, out : 0..n-1;

with the variables in and out initialized to 0. In the shared buffer there are two logical pointers, in and out, implemented over a circular array. The in variable points to the next free position in the buffer and the out variable points to the first full position. When in = out the buffer is empty, and when (in + 1) mod n = out the buffer is full.
Inter Process Communication (IPC)

A process can be of two types:


 Independent process.
 Co-operating process.
An independent process is not affected by the execution of other processes, while a cooperating process can be affected by other executing processes. Though one might think that processes running independently execute very efficiently, in reality there are many situations where the cooperative nature of processes can be exploited to increase computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of cooperation between them. Processes can communicate with each other through both:

1. Shared Memory
2. Message passing
Processes communicate either through a region of shared memory or by exchanging messages; both methods are described below.

An operating system can implement both methods of communication. First, we will discuss the shared memory method of communication and then message passing. Communication between processes using shared memory requires the processes to share some variable, and it completely depends on how the programmer implements it. One way of communicating using shared memory can be imagined like this: suppose process1 and process2 are executing simultaneously and share some resources, or use some information from each other. Process1 generates information about certain computations or resources being used and keeps it as a record in shared memory. When process2 needs to use the shared information, it checks the record stored in shared memory, takes note of the information generated by process1, and acts accordingly. Processes can use shared memory both for extracting information recorded by another process and for delivering specific information to other processes.
Let’s discuss an example of communication between processes using the
shared memory method.

i) Shared Memory Method


Ex: Producer-Consumer problem
There are two processes: Producer and Consumer. The producer produces
some items and the Consumer consumes that item. The two processes share a
common space or memory location known as a buffer where the item produced
by the Producer is stored and from which the Consumer consumes the item if
needed. There are two versions of this problem: the first one is known as the
unbounded buffer problem in which the Producer can keep on producing items
and there is no limit on the size of the buffer, the second one is known as the
bounded buffer problem in which the Producer can produce up to a certain
number of items before it starts waiting for Consumer to consume it. We will
discuss the bounded buffer problem. First, the Producer and the Consumer will
share some common memory, then the producer will start producing items. If
the total produced item is equal to the size of the buffer, the producer will wait
to get it consumed by the Consumer. Similarly, the consumer will first check for
the availability of the item. If no item is available, the Consumer will wait for the
Producer to produce it. If there are items available, Consumer will consume
them. The pseudo-code to demonstrate is provided below:
Shared Data between the two Processes (C)

#define buff_max 25
#define mod %    // "mod" expands to C's modulus operator

struct item {
    // different members of the produced
    // or consumed data
    ---------
};

// An array is needed for holding the items.
// This is the shared place which will be
// accessed by both processes.
struct item shared_buff[buff_max];

// Two variables keep track of the indexes of the items
// produced and consumed. free_index points to the next
// free slot; full_index points to the first full slot.
int free_index = 0;
int full_index = 0;

Producer Process Code (C)

struct item nextProduced;

while (1) {
    // If there is no space for production,
    // keep waiting.
    while ((free_index + 1) mod buff_max == full_index)
        ;   // busy wait

    shared_buff[free_index] = nextProduced;
    free_index = (free_index + 1) mod buff_max;
}

Consumer Process Code (C)

struct item nextConsumed;

while (1) {
    // If no item is available for consumption,
    // keep waiting until one is produced.
    while (free_index == full_index)
        ;   // busy wait

    nextConsumed = shared_buff[full_index];
    full_index = (full_index + 1) mod buff_max;
}

In the above code, the Producer waits while (free_index + 1) mod buff_max
equals full_index, because that condition means the buffer is full: there are
still items for the Consumer to consume, and there is no room to produce
more. Similarly, if free_index and full_index point to the same index, the
buffer is empty and there are no items to consume.
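The index arithmetic above can be exercised in a single process. The sketch below uses hypothetical helper names (buf_put, buf_get, and buf_demo are not from the original text) and ordinary function calls in place of two real processes:

```c
#define BUFF_MAX 5

static int shared_buff[BUFF_MAX];
static int free_index = 0;   /* next free slot  */
static int full_index = 0;   /* first full slot */

/* Producer step: returns 1 on success, 0 if the buffer is full. */
int buf_put(int v) {
    if ((free_index + 1) % BUFF_MAX == full_index)
        return 0;                      /* full: producer must wait */
    shared_buff[free_index] = v;
    free_index = (free_index + 1) % BUFF_MAX;
    return 1;
}

/* Consumer step: returns 1 and stores the item in *v, or 0 if empty. */
int buf_get(int *v) {
    if (free_index == full_index)
        return 0;                      /* empty: consumer must wait */
    *v = shared_buff[full_index];
    full_index = (full_index + 1) % BUFF_MAX;
    return 1;
}

/* Self-check: fill the buffer, then drain it, counting items. */
int buf_demo(void) {
    int v, produced = 0, consumed = 0;
    while (buf_put(produced)) produced++;
    while (buf_get(&v)) consumed++;
    return (produced == consumed) ? produced : -1;
}
```

Note that with a buffer of size 5, at most 4 items fit at once: one slot is sacrificed so that a full buffer can be distinguished from an empty one.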
ii) Message Passing Method
Now we will start our discussion of communication between processes via
message passing. In this method, processes communicate with each other
without using any kind of shared memory. If two processes p1 and p2 want to
communicate with each other, they proceed as follows:

- Establish a communication link (if a link already exists, there is no need
to establish it again).
- Start exchanging messages using basic primitives.

We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
The message size can be fixed or variable. A fixed size is easy for the OS
designer but complicated for the programmer, whereas a variable size is easy
for the programmer but complicated for the OS designer. A standard message
has two parts: a header and a body.
The header stores the message type, destination id, source id, message
length, and control information. The control information covers things such as
what to do if the receiver runs out of buffer space, the sequence number, and
the priority. Generally, messages are sent in FIFO order.
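The header/body layout described above can be sketched as a C struct. The field names are illustrative, not a standard format:

```c
#include <string.h>

#define MAX_BODY 256

/* Control information carried in the header. */
struct msg_control {
    int seq_no;       /* sequence number for FIFO ordering    */
    int priority;     /* delivery priority                    */
    int on_overflow;  /* policy if buffer space runs out      */
};

struct message {
    /* Header */
    int type;                   /* message type       */
    int src_id, dst_id;         /* source/destination */
    int length;                 /* length of the body */
    struct msg_control control;
    /* Body */
    char body[MAX_BODY];
};

/* Build a small message; returns the recorded body length. */
int msg_demo(void) {
    struct message m;
    m.type = 1;
    m.src_id = 10;
    m.dst_id = 20;
    m.length = (int)strlen("hello");
    memcpy(m.body, "hello", m.length + 1);
    return m.length;
}
```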
Message Passing through Communication Link
Direct and Indirect Communication link
Now we will discuss the methods of implementing communication links. While
implementing a link, some questions need to be kept in mind:

1. How are links established?
2. Can a link be associated with more than two processes?
3. How many links can there be between every pair of communicating
processes?
4. What is the capacity of a link? Is the size of a message that the link can
accommodate fixed or variable?
5. Is a link unidirectional or bi-directional?
A link has some capacity that determines the number of messages that can
reside in it temporarily; for this, every link has a queue associated with it,
which can be of zero capacity, bounded capacity, or unbounded capacity. With
zero capacity, the sender waits until the receiver informs the sender that it
has received the message. In the non-zero capacity cases, a process does not
know whether a message has been received after the send operation; to find
out, the sender must communicate with the receiver explicitly. The
implementation of the link depends on the situation: it can be either a direct
communication link or an indirect communication link.
Direct communication links are implemented when the processes use a
specific process identifier for the communication, but it is hard to identify
the sender ahead of time. An example is a print server.
Indirect communication is done via a shared mailbox (port), which consists
of a queue of messages. The sender places messages in the mailbox and the
receiver picks them up.
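Such a mailbox can be sketched as a bounded FIFO queue of messages (the names below are illustrative, and plain integers stand in for message payloads):

```c
#define MBOX_CAP 8

struct mailbox {
    int msgs[MBOX_CAP];  /* queued message payloads */
    int head, tail, count;
};

/* Non-blocking send: returns 1 on success, 0 if the mailbox is full. */
int mbox_send(struct mailbox *mb, int msg) {
    if (mb->count == MBOX_CAP)
        return 0;
    mb->msgs[mb->tail] = msg;
    mb->tail = (mb->tail + 1) % MBOX_CAP;
    mb->count++;
    return 1;
}

/* Non-blocking receive: returns 1 and fills *msg, 0 if empty. */
int mbox_recv(struct mailbox *mb, int *msg) {
    if (mb->count == 0)
        return 0;
    *msg = mb->msgs[mb->head];
    mb->head = (mb->head + 1) % MBOX_CAP;
    mb->count--;
    return 1;
}

/* Self-check: messages come out in FIFO order. */
int mbox_demo(void) {
    struct mailbox mb = {{0}, 0, 0, 0};
    int v = 0;
    mbox_send(&mb, 1);
    mbox_send(&mb, 2);
    mbox_send(&mb, 3);
    mbox_recv(&mb, &v);
    return v;   /* the first message sent is received first */
}
```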
Message Passing through Exchanging the Messages
Synchronous and Asynchronous Message Passing:
A process that is blocked is one that is waiting for some event, such as a
resource becoming available or the completion of an I/O operation. IPC is
possible between processes on the same computer as well as between
processes running on different computers, i.e. in a networked/distributed
system. In both cases, the process may or may not be blocked while sending a
message or attempting to receive one, so message passing may be blocking or
non-blocking. Blocking is considered synchronous: a blocking send means the
sender is blocked until the message is received by the receiver, and a
blocking receive means the receiver is blocked until a message is available.
Non-blocking is considered asynchronous: a non-blocking send lets the sender
send the message and continue, and a non-blocking receive returns either a
valid message or null. After careful analysis, we can conclude that it is more
natural for a sender to be non-blocking after message passing, since it may
need to send messages to different processes; however, the sender then
expects an acknowledgment from the receiver in case a send fails. Similarly,
it is more natural for a receiver to block after issuing a receive, as the
information in the received message may be needed for further execution. At
the same time, if sends keep failing, the receiver would have to wait
indefinitely, which is why we also consider the other possibilities. There are
basically three preferred combinations:
- Blocking send and blocking receive
- Non-blocking send and non-blocking receive
- Non-blocking send and blocking receive (mostly used)
In direct message passing, the process that wants to communicate must
explicitly name the recipient or sender of the communication.
e.g. send(p1, message) means send the message to p1.
Similarly, receive(p2, message) means receive the message from p2.
In this method of communication, the communication link is established
automatically. It can be either unidirectional or bidirectional, but one link
is used between one pair of sender and receiver, and a pair of sender and
receiver should not possess more than one link. Symmetry and asymmetry
between sending and receiving can also be implemented: either both processes
name each other for sending and receiving messages, or only the sender names
the receiver for sending, and the receiver need not name the sender for
receiving. The problem with this method of communication is that if the name
of one process changes, it will not work.
In indirect message passing, processes use mailboxes (also referred to as
ports) for sending and receiving messages. Each mailbox has a unique id, and
processes can communicate only if they share a mailbox. A link is established
only if the processes share a common mailbox, and a single link can be
associated with many processes. Each pair of processes can share several
communication links, and these links may be unidirectional or bidirectional.
Suppose two processes want to communicate through indirect message passing;
the required operations are: create a mailbox, use the mailbox for sending
and receiving messages, then destroy the mailbox. The standard primitives are
send(A, message), which means send the message to mailbox A, and
receive(A, message), which works the same way for receiving.

There is a problem with this mailbox implementation. Suppose more than two
processes share the same mailbox and process p1 sends a message to it: which
process will be the receiver? This can be solved by enforcing that only two
processes can share a single mailbox, by enforcing that only one process is
allowed to execute a receive at a given time, or by selecting any process
randomly and notifying the sender about the receiver. A mailbox can be made
private to a single sender/receiver pair or shared between multiple
sender/receiver pairs. A port is an implementation of such a mailbox that can
have multiple senders and a single receiver; it is used in client/server
applications (in this case the server is the receiver). The port is owned by
the receiving process, is created by the OS on the request of the receiver
process, and can be destroyed either on the request of the same receiver
process or when the receiver terminates. Enforcing that only one process is
allowed to execute the receive can be done using the concept of mutual
exclusion: a mutex mailbox is created which is shared by n processes; the
sender is non-blocking and sends the message; the first process that executes
the receive enters the critical section, and all other processes block and
wait.
Now, let’s discuss the Producer-Consumer problem using the message passing
concept. The producer places items (inside messages) in the mailbox and the
consumer can consume an item when at least one message present in the
mailbox. The code is given below:
Producer Code (C)

void Producer(void) {
    int item;
    Message m;

    while (1) {
        // wait for an empty message from the Consumer
        receive(Consumer, &m);
        item = produce();
        build_message(&m, item);
        send(Consumer, &m);
    }
}
Consumer Code (C)

void Consumer(void) {
    int item;
    Message m;

    while (1) {
        receive(Producer, &m);
        item = extracted_item();
        // return the (now empty) message to the Producer
        send(Producer, &m);
        consume_item(item);
    }
}
Client/Server Communication
Client/server communication involves two components, namely a client and a
server. There are usually multiple clients in communication with a single
server. The clients send requests to the server and the server responds to
the client requests.
There are three main methods of client/server communication. These are given
as follows:

Sockets
Sockets facilitate communication between two processes on the same machine or
different machines. They are used in a client/server framework and consist of the IP
address and port number. Many application protocols use sockets for data connection
and data transfer between a client and a server.
Socket communication is quite low-level as sockets only transfer an unstructured byte
stream across processes. The structure on the byte stream is imposed by the client and
server applications.
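As a minimal sketch of a byte stream between two processes, the example below uses the POSIX socketpair() call; a real client/server would instead use socket(), bind(), listen(), accept(), and connect() with an IP address and port number:

```c
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <string.h>

/* Parent sends "ping", child replies "pong".
   Returns 1 if the parent reads back "pong". */
int socket_demo(void) {
    int sv[2];
    char buf[8];

    /* A connected pair of stream sockets. */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return 0;

    pid_t pid = fork();
    if (pid == 0) {                    /* child process */
        close(sv[0]);
        read(sv[1], buf, sizeof buf);  /* receive "ping" */
        write(sv[1], "pong", 5);       /* reply, including the NUL */
        _exit(0);
    }
    /* parent process */
    close(sv[1]);
    write(sv[0], "ping", 5);
    read(sv[0], buf, sizeof buf);
    waitpid(pid, NULL, 0);
    close(sv[0]);
    return strcmp(buf, "pong") == 0;
}
```

Note that the sockets carry only an unstructured byte stream: the "ping"/"pong" protocol here is imposed entirely by the two ends, as the text describes.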
Remote Procedure Calls
These are interprocess communication techniques used for client-server based
applications. A remote procedure call looks to the caller like an ordinary
subroutine or function call, but the called procedure executes on a remote
machine.
A client has a request that the RPC translates and sends to the server. This request
may be a procedure or a function call to a remote server. When the server receives the
request, it sends the required response back to the client.
Pipes
These are interprocess communication methods that contain two end points. Data is
entered from one end of the pipe by a process and consumed from the other end by the
other process.
The two different types of pipes are ordinary pipes and named pipes. Ordinary pipes
only allow one way communication. For two way communication, two pipes are
required. Ordinary pipes have a parent child relationship between the processes as the
pipes can only be accessed by processes that created or inherited them.
Named pipes are more powerful than ordinary pipes and allow two way communication.
These pipes exist even after the processes using them have terminated. They need to
be explicitly deleted when not required anymore.
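The ordinary pipe with its parent-child relationship can be sketched with the POSIX pipe() call; the parent writes into one end and the child reads from the other (one-way communication, as described above):

```c
#include <sys/wait.h>
#include <unistd.h>
#include <string.h>

/* Parent writes into the pipe; the child reads from the other
   end and reports success via its exit status.
   Returns 1 if the child received the expected text. */
int pipe_demo(void) {
    int fd[2];               /* fd[0] = read end, fd[1] = write end */
    char buf[16];

    if (pipe(fd) == -1)
        return 0;

    pid_t pid = fork();
    if (pid == 0) {                         /* child: reader */
        close(fd[1]);                       /* close unused write end */
        read(fd[0], buf, sizeof buf);
        _exit(strcmp(buf, "hello") == 0 ? 0 : 1);
    }
    /* parent: writer */
    close(fd[0]);                           /* close unused read end */
    write(fd[1], "hello", 6);               /* send text plus the NUL */
    close(fd[1]);

    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

The child can access the pipe only because it inherited the file descriptors across fork(), which is exactly the parent-child restriction on ordinary pipes mentioned above.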
Thread in Operating System
What is a Thread?
A thread is a path of execution within a process. A process can contain multiple
threads.
Why Multithreading?
A thread is also known as lightweight process. The idea is to achieve
parallelism by dividing a process into multiple threads. For example, in a
browser, multiple tabs can be different threads. MS Word uses multiple threads:
one thread to format the text, another thread to process inputs, etc. More
advantages of multithreading are discussed below
Process vs Thread?
The primary difference is that threads within the same process run in a shared
memory space, while processes run in separate memory spaces.
Threads are not independent of one another like processes are, and as a result
threads share with other threads their code section, data section, and OS
resources (like open files and signals). But, like a process, a thread has
its own program counter (PC), register set, and stack space.
Advantages of Thread over Process
1. Responsiveness: If a process is divided into multiple threads, then as
soon as one thread completes its part of the work, its output can be returned
immediately.
2. Faster context switch: Context switch time between threads is lower
compared to process context switch. Process context switching requires more
overhead from the CPU.
3. Effective utilization of multiprocessor systems: If we have multiple
threads in a single process, then we can schedule those threads on multiple
processors. This makes process execution faster.
4. Resource sharing: Resources like code, data, and files can be shared among
all threads within a process.
Note: stack and registers can’t be shared among the threads. Each thread has
its own stack and registers.
5. Communication: Communication between multiple threads is easier, as the
threads share a common address space, while between processes we have to
follow a specific communication technique.
6. Enhanced throughput of the system: If a process is divided into multiple
threads, and each thread function is considered as one job, then the number of
jobs completed per unit of time is increased, thus increasing the throughput of
the system.
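The advantages above can be seen in a small POSIX threads sketch (the helper names are illustrative): two threads share the process's global data section, each thread keeps its own local variables on its own stack, and a mutex protects the shared counter:

```c
#include <pthread.h>

/* Shared data section: visible to all threads of the process. */
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    int n = *(int *)arg;            /* local copy, on this thread's own stack */
    for (int i = 0; i < n; i++) {
        pthread_mutex_lock(&lock);
        counter++;                  /* update the shared data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two threads of n increments each; return the final counter. */
long thread_demo(int n) {
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, &n);
    pthread_create(&t2, NULL, worker, &n);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

Without the mutex the two threads would race on the shared counter, which is the price of sharing an address space.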
Types of Threads
There are two types of threads.
User Level Thread
Kernel Level Thread
Difference between User Level thread and Kernel Level thread
Multithreading in Operating System
A thread is a path that is followed during a program's execution. The
majority of programs written nowadays run as a single thread. For example, a
single-threaded program is not capable of reading keystrokes while making
drawings; these tasks cannot be executed by the program at the same time.
This problem can be solved through multitasking so that two or more tasks can
be executed simultaneously.
Multitasking is of two types: process-based and thread-based. Process-based
multitasking is totally managed by the OS, whereas multitasking through
multithreading can be controlled by the programmer to some extent.
The concept of multi-threading needs proper understanding of these two terms
– a process and a thread. A process is a program being executed. A process
can be further divided into independent units known as threads.
A thread is like a small light-weight process within a process. Or we can say a
collection of threads is what is known as a process.
Applications –
Threading is used widely in almost every field. It is most visible on the
internet, where transaction processing of every type (recharges, online
transfers, banking, etc.) relies on it. Threading divides a program's work
into small parts that are lightweight and place less burden on CPU and
memory, so they can be scheduled easily and still achieve the desired goal.
The concept of threading was developed to cope with fast and frequent changes
in technology and to enhance the capability of programming.
Multi Threading Models
Multithreading means multiple threads executing at the same time. Many
operating systems support kernel threads and user threads in a combined way;
an example of such a system is Solaris. There are three multithreading
models:

Many to many model.
Many to one model.
One to one model.
Many to Many Model
In this model, multiple user threads are multiplexed onto the same or a
smaller number of kernel-level threads. The number of kernel-level threads is
specific to the machine. The advantage of this model is that if a user thread
is blocked, the other user threads can be scheduled onto other kernel
threads, so the system does not block when a particular thread blocks.
It is the best multithreading model.
Many to One Model

In this model, multiple user threads are mapped to one kernel thread. When a
user thread makes a blocking system call, the entire process blocks. As there
is only one kernel thread and only one user thread can access the kernel at a
time, multiple threads cannot run on multiple processors at the same time.

Thread management is done at the user level, so it is more efficient.
One to One Model

In this model, there is a one-to-one relationship between kernel and user
threads, so multiple threads can run on multiple processors. The problem with
this model is that creating a user thread requires creating the corresponding
kernel thread.
As each user thread is connected to a different kernel thread, if any user
thread makes a blocking system call, the other user threads are not blocked.
Threading issues:
The fork() and exec() system calls
fork() is used to create a duplicate process. The meaning of the fork() and
exec() system calls changes in a multithreaded program.
If one thread in a program calls fork(), does the new process duplicate all
threads, or is the new process single-threaded? Some UNIX systems have chosen
to have two versions of fork(): one that duplicates all threads and another
that duplicates only the thread that invoked the fork() system call.
If a thread calls the exec() system call, the program specified in the
parameter to exec() will replace the entire process, including all threads.
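The behaviour of fork() in the single-threaded case can be sketched as follows (the helper name is illustrative):

```c
#include <sys/wait.h>
#include <unistd.h>

/* fork() returns 0 in the child and the child's pid in the parent.
   The child exits with a distinctive status the parent can observe. */
int fork_demo(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(42);          /* child: terminate immediately */

    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```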
Signal Handling
In UNIX systems, signals are used to notify a process that a particular event
has occurred. A signal may be received either synchronously or
asynchronously, depending on the source of and the reason for the event being
signaled.
All signals, whether synchronous or asynchronous, follow the same pattern:
- A signal is generated by the occurrence of a particular event.
- The signal is delivered to a process.
Cancellation
Thread cancellation is the task of terminating a thread before it has
completed.
For example, if multiple database threads are concurrently searching through
a database and one thread returns the result, the remaining threads might be
cancelled.
A target thread is a thread that is to be cancelled. Cancellation of a target
thread may occur in two different scenarios:
- Asynchronous cancellation: one thread immediately terminates the target
thread.
- Deferred cancellation: the target thread periodically checks whether it
should terminate, allowing it an opportunity to terminate itself in an
orderly fashion.
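Deferred cancellation can be sketched with POSIX threads: pthread_cancel() only requests cancellation, and the target thread acts on the request at a cancellation point such as pthread_testcancel():

```c
#include <pthread.h>

static void *target(void *arg) {
    (void)arg;
    for (;;) {
        /* ... do a unit of work ... */
        pthread_testcancel();   /* deferred: check for a pending request */
    }
    return NULL;
}

/* Cancel a running thread and confirm it was cancelled.
   Returns 1 if pthread_join() reports PTHREAD_CANCELED. */
int cancel_demo(void) {
    pthread_t t;
    void *res;

    pthread_create(&t, NULL, target, NULL);
    pthread_cancel(t);          /* request cancellation */
    pthread_join(t, &res);
    return res == PTHREAD_CANCELED;
}
```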
Thread pools
In a multithreaded web server, whenever the server receives a request it
creates a separate thread to service the request.
Some of the problems that arise in creating a thread per request are as
follows:
- The time required to create the thread prior to serving the request,
together with the fact that the thread is discarded once it has completed its
work.
- If all concurrent requests are serviced in new threads, there is no bound
on the number of threads concurrently active in the system.
- Unlimited threads could exhaust system resources like CPU time or memory.
The idea of a thread pool is to create a number of threads at process
start-up and place them into a pool, where they sit and wait for work.
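A minimal thread pool along these lines can be sketched with POSIX threads (the names and the integer "jobs" are illustrative): worker threads are created once at start-up and then wait on a condition variable for queued work:

```c
#include <pthread.h>

#define POOL_SIZE 4
#define QUEUE_CAP 64   /* assumed large enough for all submitted jobs */

/* A fixed-capacity job queue protected by a mutex/condition variable. */
static int queue[QUEUE_CAP];
static int q_head = 0, q_tail = 0, q_count = 0;
static int shutting_down = 0;
static long work_done = 0;   /* sum of processed job values */
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t q_nonempty = PTHREAD_COND_INITIALIZER;

static void *pool_worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&q_lock);
        while (q_count == 0 && !shutting_down)
            pthread_cond_wait(&q_nonempty, &q_lock);   /* sit and wait */
        if (q_count == 0 && shutting_down) {
            pthread_mutex_unlock(&q_lock);
            return NULL;
        }
        int job = queue[q_head];
        q_head = (q_head + 1) % QUEUE_CAP;
        q_count--;
        work_done += job;          /* "service the request" */
        pthread_mutex_unlock(&q_lock);
    }
}

/* Submit n jobs (values 1..n) to a pool of POOL_SIZE threads,
   then shut down and return the total work performed. */
long pool_demo(int n) {
    pthread_t tids[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&tids[i], NULL, pool_worker, NULL);

    for (int j = 1; j <= n; j++) {
        pthread_mutex_lock(&q_lock);
        queue[q_tail] = j;
        q_tail = (q_tail + 1) % QUEUE_CAP;
        q_count++;
        pthread_cond_signal(&q_nonempty);
        pthread_mutex_unlock(&q_lock);
    }

    pthread_mutex_lock(&q_lock);
    shutting_down = 1;
    pthread_cond_broadcast(&q_nonempty);
    pthread_mutex_unlock(&q_lock);

    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(tids[i], NULL);
    return work_done;
}
```

The pool bounds the number of active threads to POOL_SIZE and avoids paying the thread-creation cost per request, which addresses both problems listed above.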