Deadlock-Free Packet Switching
Last Updated: 22 Feb, 2022
In computer networks, deadlock is among the most serious system failures. A deadlock is a situation in which a set of packets is blocked forever and can never reach its destination, no matter what sequence of moves is performed. Deadlocks must be detected and carefully handled to avoid system failure.
Most of today's networks are packet-switching networks, in which a message is divided into packets and the packets are routed from source to destination.
Store-and-forward deadlocks:
This is the most widely discussed deadlock. An intermediate node may receive packets from different sources, must store those packets in its local buffer, and then forward them to the next node according to its routing table; because the local buffer is of finite size, a deadlock can occur.
Example: a, b, c, and d are four nodes, each with a buffer of size 4. Suppose node 'a' is sending packets to 'd' via 'b', and node 'd' is sending packets to 'a' via 'c'.
Now, to reach node 'd', all the packets held at 'b' must be transferred to 'c', and similarly, to reach 'a', all the packets held at 'c' must be transferred to 'b'. Neither node has an empty buffer, so no packet can move and the system is deadlocked.
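To make the cycle concrete, here is a minimal Python sketch of the situation above (the Node class, the can_forward helper, and the packet labels are illustrative assumptions, not part of any library): nodes 'b' and 'c' each hold 4 packets that must cross to the other node, and no forwarding move is possible.

```python
# Minimal illustration of a store-and-forward deadlock between nodes 'b' and 'c'.
# All names here (Node, can_forward, packet labels) are illustrative only.

BUFFER_SIZE = 4

class Node:
    def __init__(self, name):
        self.name = name
        self.buffers = []          # packets currently stored (at most BUFFER_SIZE)

    def is_full(self):
        return len(self.buffers) >= BUFFER_SIZE

def can_forward(src, dst):
    """A forwarding move is possible only if dst still has an empty buffer."""
    return len(src.buffers) > 0 and not dst.is_full()

b, c = Node("b"), Node("c")
b.buffers = ["a->d"] * BUFFER_SIZE   # packets travelling a -> b -> c -> d
c.buffers = ["d->a"] * BUFFER_SIZE   # packets travelling d -> c -> b -> a

# Every packet in 'b' needs a free buffer in 'c' and vice versa; none exists.
print(can_forward(b, c))   # False
print(can_forward(c, b))   # False -> no move is possible: deadlock
```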
[Figure: Store and Forward Deadlock]
Assumptions for Solving Store and Forward Deadlock:
Some assumptions for solving the issue are as follows:
Model: The network is a graph G = (V, E), where V is the set of processors and E is the set of communication links. Each node has B buffers, and k denotes the length of the longest route taken by any packet in G.
Moves: The distributed computation is viewed as a sequence of moves: something happens in the environment and a node reacts to it. Three kinds of moves are distinguished:
- Generation: A node creates, or receives from outside the network, a new packet p. This node is called the source of the packet.
- Forwarding: A packet is forwarded to the next node on its route, which must have an empty buffer for it.
- Consumption: When a packet reaches its destination, it is removed from the buffer.
Requirements for Packet Switching:
- A controller is an algorithm that permits or refuses the various moves in the network.
- The consumption of a packet (at its destination) is always allowed.
- The generation of a packet in a node whose buffers are all empty is always allowed.
- The controller uses only local information.
- A controller is said to be deadlock-free if it protects the network from deadlock.
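These requirements can be captured as a small local interface that the controllers discussed later implement. The sketch below is an assumed structure for illustration only (Packet, Node, and Controller are not a standard API); a controller inspects nothing beyond the packet and the local node's buffers.

```python
# A generic local-controller skeleton for deadlock-free packet switching.
# Class and method names are illustrative assumptions, not a standard API.
from dataclasses import dataclass

@dataclass
class Packet:
    hops_to_go: int     # s_p: hops still needed to reach the destination
    hops_made: int = 0  # t_p: hops already made from the source

class Node:
    def __init__(self, name, num_buffers):
        self.name = name
        self.num_buffers = num_buffers   # B buffers in total
        self.packets = []                # packets currently stored here

    def free_buffers(self):
        return self.num_buffers - len(self.packets)

class Controller:
    """Decides, from local information only, whether a move may proceed."""

    def accept_generation(self, node, packet):
        """May a new packet be generated at (or handed to) this node?"""
        raise NotImplementedError

    def accept_forwarding(self, next_node, packet):
        """May the packet be forwarded into next_node's buffers?"""
        raise NotImplementedError

    # Consumption at the destination is always allowed, so no check is needed.
```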
Solution for Store and Forward Deadlock:
To resolve this deadlock we have two solutions:
- Structured Buffer Pools
- Unstructured Buffer Pools
Structured Buffer Pools:
Methods using structured buffer pools identify, for a given node and packet, a specific buffer that must be used when the packet is generated or received. If this buffer is occupied, the packet cannot be accepted.
1. Buffer Graph (BG): This is a structured solution to the deadlock. It is a virtual directed graph defined over the buffers in the network such that:
- The buffer graph has no directed cycle, i.e., it is acyclic.
- For every routing path, there must be a corresponding path in the buffer graph.
[NOTE: The path along which a packet moves is determined by the routing algorithm; the buffer management strategy determines in which buffer the packet is stored at the next node.]
2. Buffer Graph Controller:
- The generation of a packet is allowed only if the buffer designated for it (by the buffer graph) in the node is empty.
- Forwarding of a packet is allowed only if the buffer designated for it in the next node is empty.
With this set of rules, deadlock is prevented and packets are protected against loss.
Disadvantage: limited use of the available buffer storage; a packet may be refused even though other buffers in the node are free.
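For concreteness, one common way to instantiate the buffer-graph idea is a "hops-so-far" scheme: a packet that has made i hops must occupy buffer class i, so buffer indices only increase along a route and the buffer graph cannot contain a cycle. The sketch below assumes every node has at least k + 1 buffers (here k = 3); all names are illustrative, and this is only one possible buffer graph, not the only one.

```python
# A "hops-so-far" buffer-graph scheme (sketch): buffer class i holds only
# packets that have made exactly i hops, which keeps the buffer graph acyclic.
# Assumes every node has K + 1 buffers; all names are illustrative.

K = 3  # longest route length assumed for this sketch

class Node:
    def __init__(self, name):
        self.name = name
        # one buffer class per hop count 0..K; None means the buffer is empty
        self.buffer = [None] * (K + 1)

def accept_generation(node, packet_id):
    """A newly generated packet (0 hops made) must take buffer class 0."""
    if node.buffer[0] is None:
        node.buffer[0] = packet_id
        return True
    return False

def accept_forwarding(next_node, packet_id, hops_made):
    """hops_made = hops the packet will have made on arrival at next_node;
    it must take buffer class hops_made there."""
    if hops_made <= K and next_node.buffer[hops_made] is None:
        next_node.buffer[hops_made] = packet_id
        return True
    return False
```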
Unstructured Buffer Pools:
In methods using unstructured buffer pools, all buffers are equal; the method only prescribes whether or not a packet can be accepted, but does not determine in which buffer it must be placed.
1. Forward-Count Controller (FC): This is an unstructured solution. For a packet p, let s_p be the number of hops p still has to make to reach its destination, and let f_u be the number of free buffers in node u; the controller accepts packet p in node u if s_p < f_u.
If B denotes the number of buffers in each node and k denotes the maximum number of hops any packet needs to reach its destination, then B > k ensures a deadlock-free state.
From this, the forward-count controller is a deadlock-free controller.
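A minimal sketch of the forward-count acceptance test, using the notation above (the function name and example values are illustrative):

```python
def fc_accepts(hops_to_go, free_buffers):
    """Forward-count rule: accept packet p in node u iff s_p < f_u."""
    return hops_to_go < free_buffers

# Example with a node that currently has 2 free buffers:
print(fc_accepts(hops_to_go=1, free_buffers=2))  # True:  1 < 2, accept
print(fc_accepts(hops_to_go=3, free_buffers=2))  # False: 3 >= 2, refuse
```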
2. Forward-State Controller (FS): This controller uses more information about the packets already stored in the receiving node, namely how far each of them still has to travel, rather than just the number of free buffers. This allows it to accept packets in more situations than the forward-count controller while still remaining a deadlock-free controller.
3. Backward-Count Controller (BC): This is a variant of the forward-count controller. For a packet p, let t_p be the number of hops it has already made from its source; the controller accepts packet p in a node u if t_p > k - f_u, where k is the maximum route length and f_u is the number of free buffers in u.
Applying this rule in a network prevents deadlock, so the backward-count controller is also a deadlock-free controller.
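A matching sketch of the backward-count acceptance test (the assumed route length k and example values are illustrative):

```python
K = 4  # assumed longest route length for this sketch

def bc_accepts(hops_made, free_buffers, k=K):
    """Backward-count rule: accept packet p in node u iff t_p > k - f_u."""
    return hops_made > k - free_buffers

# Example with a node that currently has 2 free buffers (so k - f_u = 2):
print(bc_accepts(hops_made=3, free_buffers=2))  # True:  3 > 2, accept
print(bc_accepts(hops_made=1, free_buffers=2))  # False: 1 <= 2, refuse
```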
4. Backward-State Controller (BS): Similar to the forward-state controller, the backward-state controller uses more information about the packets already stored in the receiving node, namely how many hops each of them has already made.
Relations Between FC, BC, FS, BS:
Here C1 ⊂ C2 indicates that C2 is the more permissive of the two controllers (every move permitted by C1 is also permitted by C2):
- FC ⊂ FS
- BC ⊂ BS
- BC ⊂ FC
- BS ⊂ FS
[Figure: Lattice diagram showing the relationship between FC, BC, FS, and BS]
Some other deadlocks are:
1. Progeny deadlock may arise when a packet p in the network can create another packet q, which violates the assumption that the network always allows forwarding and consumption of a packet. Progeny deadlock can be avoided by having multiple levels of the buffer graph.
2. Copy-release deadlock may arise when the source holds a copy of the packet until an (end-to-end) acknowledgment for the packet is received from the destination. Two extensions of the buffer-graph principle are given by which copy release deadlock can be avoided.
3. Pacing deadlock may arise when the network contains nodes, with limited internal storage, that may refuse to consume messages until some other messages have been generated. Pacing deadlock can be avoided by making a distinction between pacable packets and pacing responses.
4. Reassembly deadlock may arise in networks where large messages are divided into smaller packets for transmission and no packet can be removed from the network until all packets of the message have reached the destination. Reassembly deadlocks can be avoided by using separate groups of buffers for packet forwarding and reassembly.