Inter Process Communication
IPC refers to the mechanisms and techniques that operating systems use to facilitate
communication between different processes. In a multitasking environment, numerous
processes are running concurrently, and IPC serves as the bridge that allows them to
exchange information and coordinate their actions.
1. Data Sharing: Processes often need to share data. For example, a text
editor may need to pass data to a printer process to generate a hard copy.
2. Resource Sharing: IPC helps manage and share resources, such as file
access, memory, or hardware devices, among processes.
IPC Methods
There are several methods of IPC used in modern operating systems. Each method has
its strengths and weaknesses, and the choice of method depends on the specific
requirements of the processes involved:
1. Pipes and FIFOs: Pipes are unidirectional and are used for communication
between related processes (such as a parent and its child), while FIFOs
(named pipes) have a name in the filesystem and can therefore be used
between unrelated processes (and, on some systems, opened for two-way
communication). A minimal pipe sketch follows this list.
2. Signals: Processes can send signals to each other to notify one another of
events or requests. Signals are lightweight and are often used for process
management and error handling; a signal-handling sketch also appears below.
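
As a minimal sketch of the pipe mechanism in item 1 (assuming a POSIX system; all names here are illustrative, not a prescribed implementation), the following C program creates a pipe, forks, and passes a short message from the parent to the child:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                      /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                 /* child: reads from the pipe */
        close(fd[1]);               /* close the unused write end */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        close(fd[0]);
    } else {                        /* parent: writes into the pipe */
        close(fd[0]);               /* close the unused read end */
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                 /* reap the child */
    }
    return 0;
}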
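
And here is a similarly hedged sketch of item 2: the process installs a handler for SIGUSR1 with sigaction and waits until another process delivers that signal (for example with the kill command):

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

/* handler: just record that the signal arrived */
static void handle_sigusr1(int signo) {
    (void)signo;
    got_signal = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sigemptyset(&sa.sa_mask);
    sa.sa_handler = handle_sigusr1;
    sigaction(SIGUSR1, &sa, NULL);   /* install the handler */

    printf("send SIGUSR1 to pid %d (e.g. kill -USR1 %d)\n",
           (int)getpid(), (int)getpid());

    while (!got_signal)
        pause();                     /* sleep until a signal arrives */

    printf("received SIGUSR1\n");
    return 0;
}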
Applications of IPC
IPC is a fundamental component of modern operating systems and finds applications in
various scenarios:
1. Shell Pipelines: In Unix-like systems, the shell uses pipes to connect the
output of one command to the input of another (a sketch of this wiring
follows the list).
2. Graphical User Interfaces (GUIs): GUI applications use IPC for event
handling, such as sending messages between windows or processes.
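
To show how a shell might set up such a pipeline, here is a rough C sketch (POSIX assumed; the commands ls and wc are just examples) that reproduces "ls | wc -l" using pipe, dup2, and exec:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* Rough equivalent of the shell pipeline "ls | wc -l". */
int main(void) {
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {              /* first child: runs "ls" */
        dup2(fd[1], STDOUT_FILENO); /* stdout now writes into the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        perror("execlp ls");
        exit(1);
    }

    if (fork() == 0) {              /* second child: runs "wc -l" */
        dup2(fd[0], STDIN_FILENO);  /* stdin now reads from the pipe */
        close(fd[0]);
        close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        perror("execlp wc");
        exit(1);
    }

    close(fd[0]);                   /* parent closes both ends... */
    close(fd[1]);
    wait(NULL);                     /* ...and waits for both children */
    wait(NULL);
    return 0;
}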
Disadvantages of IPC
MULTIPROCESSING AND PARALLEL PROCESSING IN OPERATING SYSTEMS
“Multiprocessing is the use of two or more central processing units (CPUs)
within a single computer system.”
Multiprocessing is the use of two or more processors working in
synchronization within the same system. Synchronization means that these
processing units work on the same task with independent resources, and their
results are combined into the final solution.
Multiple CPUs are linked together so that a job can be divided and executed
more quickly. When a job is completed, the results from all CPUs are compiled
to produce the final output. The jobs share main memory, and they may often
share other system resources as well. Multiple CPUs can also be used to run
several independent tasks at the same time.
Advantages of Multiprocessing OS
Increased reliability: Processing tasks can be spread among numerous processors in
the multiprocessing system. This promotes reliability because if one processor fails, the
task can be passed on to another.
Increased throughput: More work can be done in less time as the number of
processors increases.
The economy of scale: Multiprocessor systems can cost less than an equivalent
set of single-processor computers because the processors share peripherals,
storage devices, and power supplies.
Disadvantages of Multiprocessing OS
Multiprocessing operating systems are more complex and advanced since they manage
many CPUs at the same time.
Types of Multiprocessing OS
Symmetrical
Each processor in a symmetrical multiprocessing system runs the same copy of
the OS, makes its own decisions, and cooperates with the other processors to
keep the system running smoothly.
Characteristics
Asymmetric
The processors in an asymmetric system have a master-slave relationship: one
processor serves as the master or supervisor processor and assigns work to the
remaining processors, which carry out the tasks they are given.
2. Variable Partitioning
In variable partitioning, when a process is loaded into memory, it is allocated
a partition that is just large enough to accommodate it. Variable partitioning
allows for more efficient memory utilization than fixed partitioning but
requires more complex memory management algorithms.
3. First Fit Allocation
First fit allocation is a memory allocation algorithm where the operating system allocates
the first available partition that is large enough to accommodate a process. This method
is simple and efficient but can lead to fragmentation, where the available memory is
broken up into small, unusable chunks.
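
A small C sketch of the idea, using a hypothetical table of free-partition sizes rather than any real allocator's data structures:

#include <stddef.h>

/* First fit: return the index of the first free block that can hold
   `request` bytes, or -1 if none is large enough.  `free_sizes` is a
   hypothetical table of free-partition sizes. */
int first_fit(const size_t *free_sizes, int n_blocks, size_t request) {
    for (int i = 0; i < n_blocks; i++) {
        if (free_sizes[i] >= request)
            return i;               /* stop at the first block that fits */
    }
    return -1;                      /* no block can satisfy the request */
}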
4. Best Fit Allocation
Best fit allocation is a memory allocation algorithm where the operating system allocates
the smallest available partition that is large enough to accommodate a process. This
method helps reduce fragmentation but can be less efficient than first fit allocation in
terms of memory utilization.
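
The same hypothetical free-block table can illustrate best fit; the only change from first fit is that the whole table is scanned for the tightest match:

#include <stddef.h>

/* Best fit: return the index of the smallest free block that can still
   hold `request` bytes, or -1 if none fits.  Scanning the whole table
   keeps the leftover fragment as small as possible. */
int best_fit(const size_t *free_sizes, int n_blocks, size_t request) {
    int best = -1;
    for (int i = 0; i < n_blocks; i++) {
        if (free_sizes[i] >= request &&
            (best == -1 || free_sizes[i] < free_sizes[best]))
            best = i;
    }
    return best;
}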
5. Worst Fit Allocation
Worst fit allocation is a memory allocation algorithm where the operating system
allocates the largest available partition to a process. This method can lead to more
fragmentation compared to first fit and best fit allocation but can be useful for allocating
large processes.
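
Worst fit is the mirror image of best fit in this sketch, keeping the largest hole that still fits so that the leftover piece remains usable:

#include <stddef.h>

/* Worst fit: return the index of the largest free block that can hold
   `request` bytes, or -1 if none fits. */
int worst_fit(const size_t *free_sizes, int n_blocks, size_t request) {
    int worst = -1;
    for (int i = 0; i < n_blocks; i++) {
        if (free_sizes[i] >= request &&
            (worst == -1 || free_sizes[i] > free_sizes[worst]))
            worst = i;
    }
    return worst;
}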
6. Round Robin Scheduling
Round robin scheduling is a CPU scheduling algorithm where each process is assigned
a fixed time slice or quantum to execute. Once a process’s time slice expires, it is
preempted, and the next process in the queue is executed. Round robin scheduling
ensures fairness among processes but can lead to inefficient CPU utilization if the time
slices are too small.
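
The following sketch simulates round robin over three hypothetical processes with an assumed quantum of 4 time units; a real scheduler maintains a proper ready queue, but the cycling behaviour is the same:

#include <stdio.h>

#define QUANTUM 4   /* assumed time slice, in arbitrary time units */

/* Simulate round robin over hypothetical processes, each described
   only by its remaining CPU burst time. */
int main(void) {
    int remaining[] = { 10, 5, 8 };          /* burst times of P0, P1, P2 */
    int n = 3, done = 0, clock = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {        /* cycle through the queue */
            if (remaining[i] <= 0)
                continue;                    /* this process has finished */
            int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            clock += run;                    /* process i runs for `run` units */
            remaining[i] -= run;
            printf("P%d ran %d units (t = %d)\n", i, run, clock);
            if (remaining[i] == 0) {
                printf("P%d finished at t = %d\n", i, clock);
                done++;
            }
        }
    }
    return 0;
}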
7. Priority-Based Scheduling
Priority-based scheduling is a CPU scheduling algorithm where each process is
assigned a priority. The operating system schedules processes with higher priorities
before processes with lower priorities. Priority-based scheduling ensures that important
processes are executed first but can lead to starvation, where low-priority processes
never get a chance to execute.
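
A sketch of the non-preemptive variant, again over hypothetical processes, where a larger number denotes a higher priority:

#include <stdio.h>

/* Non-preemptive priority scheduling: repeatedly pick the ready process
   with the highest priority and run it to completion. */
int main(void) {
    int burst[]    = { 6, 3, 8 };    /* CPU bursts of P0, P1, P2 */
    int priority[] = { 2, 5, 1 };    /* higher value = more important */
    int n = 3, clock = 0;

    for (int scheduled = 0; scheduled < n; scheduled++) {
        int pick = -1;
        for (int i = 0; i < n; i++) {        /* find highest-priority job left */
            if (burst[i] > 0 && (pick == -1 || priority[i] > priority[pick]))
                pick = i;
        }
        clock += burst[pick];                /* run it to completion */
        printf("P%d (priority %d) ran from t = %d to t = %d\n",
               pick, priority[pick], clock - burst[pick], clock);
        burst[pick] = 0;                     /* mark as finished */
    }
    return 0;
}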