File Caching in Distributed File Systems
Last Updated: 19 Apr, 2023
File caching enhances I/O performance because previously read files are kept in main memory. Because the files are available locally, no network transfer is needed when requests for these files are repeated. The performance improvement of the file system depends on the locality of the file access pattern. Caching also helps with reliability and scalability.
File caching is an important feature of distributed file systems that helps to improve performance by reducing network traffic and minimizing disk access. In a distributed file system, files are stored across multiple servers or nodes, and file caching involves temporarily storing frequently accessed files in memory or on local disks to reduce the need for network access or disk access.
Here are some ways file caching is implemented in distributed file systems:
- Client-side caching: In this approach, the client machine stores a local copy of frequently accessed files. When a file is requested, the client checks whether the local copy is up-to-date and, if so, uses it instead of requesting the file from the server. This reduces network traffic and improves performance by reducing the need for network access.
- Server-side caching: In this approach, the server stores frequently accessed files in memory or on local disks to reduce the need for disk access. When a file is requested, the server checks whether it is in the cache and, if so, returns it without accessing the disk. This approach can also reduce network traffic by reducing the need to transfer files over the network.
- Distributed caching: In this approach, the file cache is distributed across multiple servers or nodes. When a file is requested, the system checks whether it is in the cache and, if so, returns it from the nearest server. This approach reduces network traffic by minimizing the need for data to be transferred across the network.
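The client-side approach can be sketched as a small lookup layer in front of the server. The sketch below is illustrative, not a real DFS client: `fetch_from_server` stands in for whatever remote read the system provides, and freshness is approximated with a simple time-to-live window (real systems validate against the server instead, as discussed later in this article).

```python
import time

class ClientFileCache:
    """Minimal client-side cache sketch: serve a local copy while it is fresh."""

    def __init__(self, fetch_from_server, ttl_seconds=30):
        self.fetch = fetch_from_server      # callable: path -> bytes (assumed)
        self.ttl = ttl_seconds              # freshness window
        self.entries = {}                   # path -> (data, cached_at)

    def read(self, path):
        entry = self.entries.get(path)
        if entry is not None:
            data, cached_at = entry
            if time.time() - cached_at < self.ttl:
                return data                 # cache hit: no network access
        data = self.fetch(path)             # cache miss: go to the server
        self.entries[path] = (data, time.time())
        return data
```

A repeated read within the TTL never touches the network, which is exactly the saving client-side caching aims for.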
Advantages of file caching in distributed file systems include:
- Improved performance: By reducing network traffic and minimizing disk access, file caching can significantly improve the performance of distributed file systems.
- Reduced latency: File caching can reduce latency by allowing files to be accessed more quickly without the need for network access or disk access.
- Better resource utilization: File caching allows frequently accessed files to be stored in memory or on local disks, reducing the need for network or disk access and improving resource utilization.
However, there are also some disadvantages to file caching in distributed file systems, including:
- Increased complexity: File caching can add complexity to distributed file systems, requiring additional software and hardware to manage and maintain the cache.
- Cache consistency issues: Keeping the cache up-to-date can be a challenge, and inconsistencies between the cache and the actual file system can occur.
- Increased memory usage: File caching requires additional memory resources to store frequently accessed files, which can lead to increased memory usage on client machines and servers.
Overall, file caching is an important feature of distributed file systems that can improve performance and reduce latency. However, it also introduces some complexity and requires careful management to ensure cache consistency and efficient resource utilization.
The majority of today's distributed file systems employ some form of caching. File caching schemes are determined by a number of criteria, including cached data granularity, cache size (large/small/fixed/dynamic), replacement policy, cache location, modification propagation mechanisms, and cache validation.
Cache Location: In a client-server system where both machines have main memory and a disk, the file might be kept on the disk or in the main memory of either the client or the server.

Server's Disk: This is always the original location where the file is saved. There is enough space here in case the file is modified and becomes longer. Additionally, the file is visible to all clients.
Advantages: There are no consistency issues because each file has only one copy. However, when a client wants to read a file, two transfers are required: from the server's disk to the server's main memory, and from the server's main memory across the network to the client's main memory.
Disadvantages:
- Both of these transfers can take some time. Part of the transfer time can be avoided by caching the file in the server's main memory to boost performance.
- Because main memory is limited, an algorithm will be required to determine which files or parts of files should be maintained in the cache. This algorithm will be based on two factors: the cache unit and the replacement mechanism to apply when the cache is full.
Server's Main Memory: The question is whether to cache the complete file or only disk blocks when the file is cached in the server's main memory. If the full file is cached, it can be stored in contiguous locations, and high-speed transmission results in good performance. Disk-block caching makes the cache and disk space more efficient.
Standard caching techniques are employed to manage the block cache. Compared to memory references, cache references are relatively infrequent. With LRU (Least Recently Used), the oldest block is picked for eviction. The evicted copy can simply be discarded if there is an up-to-date copy on the disk; otherwise, the cached data must first be written back to the disk. Clients can easily and transparently access a file cached in the server's main memory, and the server can easily keep the disk and main-memory copies of the file consistent. From the client's point of view, only one copy of the file exists in the system.
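The LRU replacement policy described above can be sketched with an ordered map: each access moves a block to the "most recent" end, and when the cache is full the block at the "least recent" end is evicted. This is an illustrative sketch, not a server implementation; the `(file_id, block_no)` key shape is an assumption.

```python
from collections import OrderedDict

class LRUBlockCache:
    """Fixed-size block cache: evict the least recently used block when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # (file_id, block_no) -> data

    def get(self, file_id, block_no):
        key = (file_id, block_no)
        if key not in self.blocks:
            return None                      # miss: caller reads from disk
        self.blocks.move_to_end(key)         # mark as most recently used
        return self.blocks[key]

    def put(self, file_id, block_no, data):
        key = (file_id, block_no)
        self.blocks[key] = data
        self.blocks.move_to_end(key)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used
```

A real server would also track dirty blocks so that an evicted block with no up-to-date disk copy is written back before being discarded, as noted above.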
Client's disk: The data can also be saved on the client's hard disk. Although network transfer is reduced, the disk must still be accessed on a cache hit. Because the modified data will still be available after data loss or a crash, this technique improves reliability: the information can then be recovered from the client's hard disk.
Even if the client is disconnected from the server, the file can still be accessed. Because access to the disk can be handled locally, there is no need to contact the server; this enhances scalability and dependability.
Advantages:
- Reliability is increased, as data can be recovered in case of data loss.
- The client's disk has a significantly larger storage capacity than the client's main memory, so more data can be cached, resulting in a higher cache-hit ratio. Most distributed file systems employ a file-level data transfer architecture, in which the entire file is cached.
- Scalability is increased as access to the disk can be handled locally.
Disadvantages:
- Disk caching is incompatible with diskless workstations. Moreover, every cache access requires a disk access, which considerably increases the response time.
- It must be decided whether to cache in the server's main memory or on the client's disk. Although server caching eliminates disk access on reads, network transfer is still required; caching data on the client side is a solution for reducing network transfer time. Whether the system should use the client's main memory or its disk depends on whether it needs to save space or improve performance.
- Access is slower when the data sits on disk: the server's main memory may be able to provide a file faster than the client's disk can. Caching on the client's disk is preferable when the file size is very large. The below figure shows the simplest approach, i.e., avoiding caching altogether.

Client's Main Memory: Once it is decided that files should be cached in the client's memory, caching can take place in the user process's address space, in the kernel, or in a cache manager running as a user process.
The second alternative is to cache the files in the address space of each user process, as shown:

The system-call library is in charge of the cache. Files are opened, closed, read, and written during process execution. The library saves the most frequently used files so that they can be reused if necessary, and the updated files are written back to the server once the process has finished. This technique works well when individual processes open and close the same files repeatedly.
It is fine for database managers, but not in circumstances where the files might never be accessed again by the same process.
The file can be cached in the kernel instead of the user process's address space, as shown. This technique, however, necessitates several system calls to access the file on each cache hit.

A separate user-level cache manager can be used to cache the files. As a result, the kernel no longer has to maintain the file system code, and it becomes more isolated and flexible. The kernel can decide on the allocation of memory space for the program vs. cache on run time. The kernel can store some of the cached files in the disk if the cache manager runs in virtual memory, and the blocks are brought to the main memory on cache hit.

Advantages:
- This technique is more isolated and flexible (as the kernel no longer has to maintain the file system code)
- When individual processes open and close files regularly, the access time decreases, so the performance gain is greatest.
- Allows for diskless workstations.
- Contributes to the scalability and reliability of the system.
Disadvantages:
- A separate user-level cache manager is required.
- When the cache manager runs in virtual memory, cached pages may themselves be paged out to disk, reducing the benefit of client caching, although the cache manager can lock some frequently requested pages in memory.
Cache Consistency - Cache Update Policy:
When the cache is located on the client's node, numerous users can access the same data or file at the same time in a file system. If all caches contain the same most current data, they are considered to be consistent. It's possible that the data will become inconsistent if some users modify the file. The distributed system that uses a DFS must keep its data copies consistent.
Depending on when changes are propagated to the server and how the validity of cached data is checked, several consistency strategies are possible, including write-through, delayed-write, write-on-close, and centralized control.
When the cache is located on the client's node and one user writes data to the cache, the change must also become visible to the other users. The write policy determines when that write is propagated.
There are four cache update policies:
- Write-Through: When a user edits a cache entry in this method, the change is immediately written through to the server. Any process that requests the file from the server will then always receive the most up-to-date version. Consider the following scenario: a client process reads a file, caches it, and then exits. A short time later, another client modifies the same file and sends the change to the server. If a process is then started on the first machine with the cached copy of the file, it will obtain an outdated copy. To avoid this, the cached copy must be validated against the server by comparing the modification times of the two copies: the cached copy on the client's machine and the master copy on the server.
- Delayed Write: To reduce continuous network traffic, updates are batched and written to the server periodically; this is known as 'delayed write'. It enhances performance by allowing a single bulk write operation rather than many tiny writes. A temporary file that is deleted before the next flush need never be stored on the file server at all.
- Write on Close: One step further is to write the file back to the server only once it has been closed; 'write on close' is the name of the algorithm. If a cached file is written twice in succession before the flush, the second write overwrites the first. It is comparable to two processes in a single-CPU system that read or write a file in their own address spaces and then write it back to the server.
- Centralized Control: For tracking purposes, the client sends information about the files it has just opened to the server, which then performs read, write, or both activities. Multiple processes may read from the same file, but once one process has opened the file for writing, all other processes will be denied access. After the server receives notification that the file has been closed, it updates its table, and only then can additional users access the file.
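The contrast between the first two policies above can be sketched as a cache that either propagates each write immediately or marks files dirty for a later batch flush. This is an illustrative sketch under simplifying assumptions: a plain dict stands in for the file server, and the `policy` string selects the behavior.

```python
class WritePolicyCache:
    """Sketch contrasting write-through and delayed-write propagation."""

    def __init__(self, server, policy="write-through"):
        self.server = server    # dict standing in for the file server (assumed)
        self.policy = policy    # "write-through" or "delayed-write"
        self.cache = {}
        self.dirty = set()      # files modified locally but not yet propagated

    def write(self, path, data):
        self.cache[path] = data
        if self.policy == "write-through":
            self.server[path] = data     # propagate immediately
        else:
            self.dirty.add(path)         # batch for a later flush

    def flush(self):
        """Periodic flush (delayed-write); calling this on close models
        the write-on-close policy instead."""
        for path in self.dirty:
            self.server[path] = self.cache[path]
        self.dirty.clear()
```

Note how delayed-write naturally coalesces repeated writes to the same file: only the last cached version reaches the server at flush time, which is the behavior described for write-on-close as well.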
Cache Validation Scheme:
When a cache's data is modified, the modification propagation policy tells when the master copy of the file on the server node is updated. It provides no information about when the file data in other nodes' caches are updated. Data from a file may be stored in the caches of many nodes at the same time.
When another client alters the data corresponding to the cache item in the master copy of the file on the server, the client's cache entry becomes outdated. It's required to check whether the data cached at a client node matches the master copy. If this is not the case, the cached data must be invalidated and a new version of the data must be requested from the server.
To check the validity of cached data, two schemes are used:
- Client-initiated Approach: The client contacts the server and verifies that the data in its cache is consistent with the master copy. The check can be performed at different times:
  - Verify before each access: Since the server must be contacted on every access, this defeats the actual purpose of caching.
  - Verify periodically: Validation is done at a regular, predetermined interval.
  - Verify on opening the file: The cache entry is checked when a file is opened.
- Server-initiated Approach: When a client opens a file, it informs the file server of the purpose for opening the file - reading, writing, or both. It is then the duty of the file server to keep track of which client is working on which file and in which mode(s). When the server identifies any chance of inconsistency when the file is used by various clients, it reacts.
- A client notifies the server of the closure, as well as any changes made to the file when it closes a file. The server then updates its database to reflect which clients have which files open in which modes.
- Whenever a new client requests to open a file that is already open and the server detects an actual or possible inconsistency, it can deny or queue the request, or disable caching by asking all clients that have the file open to remove it from their caches.
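The client-initiated, verify-on-open scheme can be sketched as a cheap timestamp comparison: on open, the client asks the server only for the file's modification time and refetches the (much larger) file data only when the timestamps disagree. The helper names below (`get_server_mtime`, `fetch`) are assumptions standing in for the system's real RPCs.

```python
def validate_on_open(cache_entry, server_mtime):
    """Client-initiated check: is the cached copy as new as the master copy?"""
    data, cached_mtime = cache_entry
    return cached_mtime >= server_mtime      # True -> cached copy is valid

def open_file(path, cache, get_server_mtime, fetch):
    """Open with verify-on-open: only the timestamp crosses the network
    when the cache is valid; the file body is refetched only on staleness."""
    mtime = get_server_mtime(path)           # small metadata request
    entry = cache.get(path)
    if entry is not None and validate_on_open(entry, mtime):
        return entry[0]                      # validated cache hit
    data = fetch(path)                       # stale or absent: refetch
    cache[path] = (data, mtime)
    return data
```

This is the validation step the write-through discussion above calls for: comparing the modification time of the cached copy with that of the master copy on the server before trusting the cache.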
Similar Reads
Distributed Systems Tutorial
A distributed system is a system of multiple nodes that are physically separated but linked together using the network. Each of these nodes includes a small amount of the distributed operating system software. Every node in this system communicates and shares resources with each other and handles pr
8 min read
Introduction to Distributed System
What is a Distributed System?
A distributed system is a collection of independent computers that appear to the users of the system as a single coherent system. These computers or nodes work together, communicate over a network, and coordinate their activities to achieve a common goal by sharing resources, data, and tasks.Table o
7 min read
Features of Distributed Operating System
A Distributed Operating System manages a network of independent computers as a unified system, providing transparency, fault tolerance, and efficient resource management. It integrates multiple machines to appear as a single coherent entity, handling complex communication, coordination, and scalabil
9 min read
Evolution of Distributed Computing Systems
In this article, we will see the history of distributed computing systems from the mainframe era to the current day to the best of my knowledge. It is important to understand the history of anything in order to track how far we progressed. The distributed computing system is all about evolution from
8 min read
Types of Transparency in Distributed System
In distributed systems, transparency plays a pivotal role in abstracting complexities and enhancing user experience by hiding system intricacies. This article explores various types of transparencyâranging from location and access to failure and securityâessential for seamless operation and efficien
6 min read
What is Scalable System in Distributed System?
In distributed systems, a scalable system refers to the ability of a networked architecture to handle increasing amounts of work or expand to accommodate growth without compromising performance or reliability. Scalability ensures that as demand growsâwhether in terms of user load, data volume, or tr
10 min read
Middleware in Distributed System
In distributed systems, middleware is a software component that provides services between two or more applications and can be used by them. Middleware can be thought of as an application that sits between two separate applications and provides service to both. In this article, we will see a role of
7 min read
Difference between Hardware and Middleware
Hardware and Middleware are both parts of a Computer. Hardware is the combination of physical components in a computer system that perform various tasks such as input, output, processing, and many more. Middleware is the part of software that is the communication medium between application and opera
4 min read
What is Groupware in Distributed System?
Groupware in distributed systems refers to software designed to support collaborative activities among geographically dispersed users, enhancing communication, coordination, and productivity across diverse and distributed environments.Groupware in Distributed SystemImportant Topics for Groupware in
6 min read
Difference between Parallel Computing and Distributed Computing
IntroductionParallel Computing and Distributed Computing are two important models of computing that have important roles in todayâs high-performance computing. Both are designed to perform a large number of calculations breaking down the processes into several parallel tasks; however, they differ in
5 min read
Difference between Loosely Coupled and Tightly Coupled Multiprocessor System
When it comes to multiprocessor system architecture, there is a very fine line between loosely coupled and tightly coupled systems, and this is why that difference is very important when choosing an architecture for a specific system. A multiprocessor system is a system in which there are two or mor
5 min read
Design Issues of Distributed System
Distributed systems are used in many real-world applications today, ranging from social media platforms to cloud storage services. They provide the ability to scale up resources as needed, ensure data is available even when a computer fails, and allow users to access services from anywhere. However,
8 min read
Introduction to Distributed Computing Environment (DCE)
The Benefits of Distributed Systems have been widely recognized. They are due to their ability to Scale, Reliability, Performance, Flexibility, Transparency, Resource-sharing, Geo-distribution, etc. In order to use the advantages of Distributed Systems, appropriate support and environment are needed
3 min read
Limitations of Distributed Systems
Distributed systems are essential for modern computing, providing scalability and resource sharing. However, they face limitations such as complexity in management, performance bottlenecks, consistency issues, and security vulnerabilities. Understanding these challenges is crucial for designing robu
8 min read
Various Failures in Distributed System
DSM implements distributed systems shared memory model in an exceedingly distributed system, that hasnât any physically shared memory. The shared model provides a virtual address space shared between any numbers of nodes. The DSM system hides the remote communication mechanism from the appliance aut
3 min read
Types of Operating Systems
Operating Systems can be categorized according to different criteria like whether an operating system is for mobile devices (examples Android and iOS) or desktop (examples Windows and Linux). Here, we are going to classify based on functionalities an operating system provides.8 Main Operating System
11 min read
Types of Distributed System
Pre-requisites: Distributed System A Distributed System is a Network of Machines that can exchange information with each other through Message-passing. It can be very useful as it helps in resource sharing. It enables computers to coordinate their activities and to share the resources of the system
8 min read
Centralized vs. Decentralized vs. Distributed Systems
Understanding the architecture of systems is crucial for designing efficient and effective solutions. Centralized, decentralized, and distributed systems each offer unique advantages and challenges. Centralized systems rely on a single point of control, providing simplicity but risking a single poin
8 min read
Three-Tier Client Server Architecture in Distributed System
The Three-Tier Client-Server Architecture divides systems into presentation, application, and data layers, increasing scalability, maintainability, and efficiency. By separating the concerns, this model optimizes resource management and allows for independent scaling and updates, making it a popular
7 min read
Communication in Distributed Systems
Remote Procedure Calls in Distributed System
What is Remote Procedural Call (RPC) Mechanism in Distributed System?
A remote Procedure Call (RPC) is a protocol in distributed systems that allows a client to execute functions on a remote server as if they were local. RPC simplifies network communication by abstracting the complexities, making it easier to develop and integrate distributed applications efficiently.
9 min read
Distributed System - Transparency of RPC
RPC is an effective mechanism for building client-server systems that are distributed. RPC enhances the power and ease of programming of the client/server computing concept. A transparent RPC is one in which programmers can not tell the difference between local and remote procedure calls. The most d
3 min read
Stub Generation in Distributed System
A stub is a piece of code that translates parameters sent between the client and server during a remote procedure call in distributed computing. An RPC's main purpose is to allow a local computer (client) to call procedures on another computer remotely (server) because the client and server utilize
3 min read
Marshalling in Distributed System
A Distributed system consists of numerous components located on different machines that communicate and coordinate operations to seem like a single system to the end-user.External Data Representation:Data structures are used to represent the information held in running applications. The information
9 min read
Server Management in Distributed System
Effective server management in distributed systems is crucial for ensuring performance, reliability, and scalability. This article explores strategies and best practices for managing servers across diverse environments, focusing on configuration, monitoring, and maintenance to optimize the operation
12 min read
Distributed System - Parameter Passing Semantics in RPC
A Distributed System is a Network of Machines that can exchange information with each other through Message-passing. It can be very useful as it helps in resource sharing. In this article, we will go through the various Parameter Passing Semantics in RPC in distributed Systems in detail. Parameter P
4 min read
Distributed System - Call Semantics in RPC
This article will go through the Call Semantics, its types, and the issues in RPC in distributed systems in detail. RPC has the same semantics as a local procedure call, the calling process calls the procedure, gives inputs to it, and then waits while it executes. When the procedure is finished, it
3 min read
Communication Protocols For RPCs
This article will go through the concept of Communication protocols for Remote Procedure Calls (RPCs) in Distributed Systems in detail. Communication Protocols for Remote Procedure Calls:The following are the communication protocols that are used: Request ProtocolRequest/Reply ProtocolThe Request/Re
5 min read
Client-Server Model
The Client-Server Model is a distributed application architecture that divides tasks or workloads between servers (providers of resources or services) and clients (requesters of those services). In this model, a client sends a request to a server for data, which is typically processed on the server
6 min read
Lightweight Remote Procedure Call in Distributed System
Lightweight Remote Procedure Call is a communication facility designed and optimized for cross-domain communications in microkernel operating systems. For achieving better performance than conventional RPC systems, LRPC uses the following four techniques: simple control transfer, simple data transfe
5 min read
Difference Between RMI and DCOM
In this article, we will see differences between Remote Method Invocation(RMI) and Distributed Component Object Model(DCOM). Before getting into the differences, let us first understand what each of them actually means. RMI applications offer two separate programs, a server, and a client. There are
2 min read
Difference between RPC and RMI
RPC stands for Remote Procedure Call which supports procedural programming. It's almost like an IPC mechanism wherever the software permits the processes to manage shared information Associated with an environment wherever completely different processes area unit death penalty on separate systems an
2 min read
Synchronization in Distributed System
Synchronization in Distributed Systems
Synchronization in distributed systems is crucial for ensuring consistency, coordination, and cooperation among distributed components. It addresses the challenges of maintaining data consistency, managing concurrent processes, and achieving coherent system behavior across different nodes in a netwo
11 min read
Logical Clock in Distributed System
In distributed systems, ensuring synchronized events across multiple nodes is crucial for consistency and reliability. Enter logical clocks, a fundamental concept that orchestrates event ordering without relying on physical time. By assigning logical timestamps to events, these clocks enable systems
10 min read
Lamport's Algorithm for Mutual Exclusion in Distributed System
Prerequisite: Mutual exclusion in distributed systems Lamport's Distributed Mutual Exclusion Algorithm is a permission based algorithm proposed by Lamport as an illustration of his synchronization scheme for distributed systems. In permission based timestamp is used to order critical section request
5 min read
Vector Clocks in Distributed Systems
Vector clocks are a basic idea in distributed systems to track the partial ordering of events and preserve causality across various nodes. Vector clocks, in contrast to conventional timestamps, offer a means of establishing the sequence of events even when there is no world clock, which makes them e
10 min read
Event Ordering in Distributed System
In this article, we will look at how we can analyze the ordering of events in a distributed system. As we know a distributed system is a collection of processes that are separated in space and which can communicate with each other only by exchanging messages this could be processed on separate compu
4 min read
Mutual exclusion in distributed system
Mutual exclusion is a concurrency control property which is introduced to prevent race conditions. It is the requirement that a process can not enter its critical section while another concurrent process is currently present or executing in its critical section i.e only one process is allowed to exe
5 min read
Performance Metrics For Mutual Exclusion Algorithm
Mutual exclusion is a program object that refers to the requirement of satisfying that no two concurrent processes are in a critical section at the same time. It is presented to intercept the race condition. If a current process is accessing the critical section then it prevents entering another con
4 min read
Cristian's Algorithm
Cristian's Algorithm is a clock synchronization algorithm is used to synchronize time with a time server by client processes. This algorithm works well with low-latency networks where Round Trip Time is short as compared to accuracy while redundancy-prone distributed systems/applications do not go h
8 min read
Berkeley's Algorithm
Berkeley's Algorithm is a clock synchronization technique used in distributed systems. The algorithm assumes that each machine node in the network either doesn't have an accurate time source or doesn't possess a UTC server.Algorithm 1) An individual node is chosen as the master node from a pool node
6 min read
Difference between Token based and Non-Token based Algorithms in Distributed System
A distributed system is a system in which components are situated in distinct places, these distinct places refer to networked computers which can easily communicate and coordinate their tasks by just exchanging asynchronous messages with each other. These components can communicate with each other
3 min read
RicartâAgrawala Algorithm in Mutual Exclusion in Distributed System
Prerequisite: Mutual exclusion in distributed systems RicartâAgrawala algorithm is an algorithm for mutual exclusion in a distributed system proposed by Glenn Ricart and Ashok Agrawala. This algorithm is an extension and optimization of Lamport's Distributed Mutual Exclusion Algorithm. Like Lamport'
3 min read
SuzukiâKasami Algorithm for Mutual Exclusion in Distributed System
Prerequisite: Mutual exclusion in distributed systems SuzukiâKasami algorithm is a token-based algorithm for achieving mutual exclusion in distributed systems.This is modification of RicartâAgrawala algorithm, a permission based (Non-token based) algorithm which uses REQUEST and REPLY messages to en
3 min read
Source Management and Process Management
Distributed File System and Distributed shared memory
What is DFS (Distributed File System)?
A Distributed File System (DFS) is a file system that is distributed on multiple file servers or multiple locations. It allows programs to access or store isolated files as they do with the local ones, allowing programmers to access files from any network or computer. In this article, we will discus
8 min read
Andrew File System
The Andrew File System (AFS) is a distributed file system that allows multiple computers to share files and data seamlessly. It was developed by Morris ET AL. in 1986 at Carnegie Mellon University in collaboration with IBM. AFS was designed to make it easier for people working on different computers
5 min read
File Service Architecture in Distributed System
File service architecture in distributed systems manages and provides access to files across multiple servers or locations. It ensures efficient storage, retrieval, and sharing of files while maintaining consistency, availability, and reliability. By using techniques like replication, caching, and l
12 min read
File Models in Distributed System
File Models in Distributed Systems" explores how data organization and access methods impact efficiency across networked nodes. This article examines structured and unstructured models, their performance implications, and the importance of scalability and security in modern distributed architectures
6 min read
File Accessing Models in Distributed System
In Distributed File Systems (DFS), multiple machines are used to provide the file systemâs facility. Different file system utilize different conceptual models of a file. The two most usually involved standards for file modeling are structure and modifiability. File models in view of these standards
4 min read
What is Replication in Distributed System?
Replication in distributed systems involves creating duplicate copies of data or services across multiple nodes. This redundancy enhances system reliability, availability, and performance by ensuring continuous access to resources despite failures or increased demand.
Atomic Commit Protocol in Distributed System
In distributed systems, transactional consistency is guaranteed by the Atomic Commit Protocol. It coordinates two phases (voting and decision) to ensure that a transaction is either fully committed or completely aborted across several nodes.
Design Principles of Distributed File System
A distributed file system is a computer system that allows users to store and access data from multiple computers in a network. It is a way to share information between different computers and is used in data centers, corporate networks, and cloud computing. Despite their importance, the design of distributed file systems…
What is Distributed Shared Memory and its Advantages?
Distributed shared memory can be achieved via both software and hardware. Hardware examples include cache-coherence circuits and network interface controllers. In contrast, software DSM systems implemented at the library or language level are not transparent, and developers usually have to program them…
Architecture of Distributed Shared Memory(DSM)
Distributed Shared Memory (DSM) implements the shared-memory model in a distributed system that has no physically shared memory. The shared-memory model provides a virtual address space shared among all nodes. To reduce the high cost of communication in distributed systems, DSM…
Difference between Uniform Memory Access (UMA) and Non-uniform Memory Access (NUMA)
In computer architecture, and especially in multiprocessor systems, memory access models play a critical role in determining the performance, scalability, and overall efficiency of the system. The two most frequently used shared-memory models are UMA and NUMA. This article deals with these shared-memory models…
Algorithm for implementing Distributed Shared Memory
A distributed shared memory (DSM) system is a resource management component of a distributed operating system that implements the shared-memory model in a distributed system that has no physically shared memory. The shared-memory model provides a virtual address space shared by all nodes in the distributed system.
Consistency Model in Distributed System
It can be difficult to guarantee that all copies of data in a distributed system stay consistent across several nodes. Consistency models establish the rules for when and how data updates become visible throughout the system. Various approaches, such as strict consistency and eventual consistency…
Distributed System - Thrashing in Distributed Shared Memory
In this article, we are going to understand thrashing in a distributed system, but before that, let us understand what a distributed system is and why thrashing occurs. In simple terms, a distributed system is a network of computers or devices at different places linked together. Each one…
Distributed Scheduling and Deadlock
Scheduling and Load Balancing in Distributed System
In this article, we will go through the concepts of scheduling and load balancing in distributed systems in detail. Scheduling in distributed systems: the techniques used for scheduling processes in distributed systems are as follows. Task Assignment Approach: in the Task Assignment Approach…
Issues Related to Load Balancing in Distributed System
This article explores critical challenges and considerations in load balancing within distributed systems. Addressing factors like workload variability, network constraints, scalability needs, and algorithmic complexity is essential for optimizing performance and resource utilization across distributed systems.
Components of Load Distributing Algorithm - Distributed Systems
In distributed systems, efficient load distribution is crucial for maintaining performance, reliability, and scalability. Load-distributing algorithms play a vital role in ensuring that workloads are evenly spread across available resources, preventing bottlenecks, and optimizing resource utilization.
Distributed System - Types of Distributed Deadlock
A deadlock is a situation where a set of processes is blocked because each process is holding a resource while waiting for another resource occupied by some other process. A distributed system is a network of machines that can exchange information…
Deadlock Detection in Distributed Systems
Prerequisites: deadlock introduction, deadlock detection. In the centralized approach to deadlock detection, two techniques are used: the completely centralized algorithm and the Ho-Ramamurthy algorithm (one-phase and two-phase). In the completely centralized algorithm, in a network of n sites, one site is…
Conditions for Deadlock in Distributed System
This article goes through the conditions for deadlock in distributed systems. Deadlock refers to the state in which two processes compete for the same resource; one process locks the resource and the other is prevented from acquiring it. Consider the…
Deadlock Handling Strategies in Distributed System
Deadlocks in distributed systems can severely disrupt operations by halting processes that are waiting for resources held by each other. Effective handling strategies (detection, prevention, avoidance, and recovery) are essential for maintaining system performance and reliability. This article explores…
Deadlock Prevention Policies in Distributed System
A deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for a resource that is held by some other process. There are four necessary conditions for a deadlock to happen. Mutual Exclusion: there is at least one resource that is…
Chandy-Misra-Haas's Distributed Deadlock Detection Algorithm
Chandy-Misra-Haas's distributed deadlock detection algorithm is an edge-chasing algorithm for detecting deadlock in distributed systems. In an edge-chasing algorithm, a special message called a probe is used in deadlock detection. A probe is a triplet (i, j, k) which denotes that process Pi has initiated…
Security in Distributed System
Security in Distributed System
Securing distributed systems is crucial for ensuring data integrity, confidentiality, and availability across interconnected networks. Key measures include implementing strong authentication mechanisms, like multi-factor authentication (MFA), and robust authorization controls such as role-based access control…
Types of Cyber Attacks
Cybersecurity is the practice of protecting sensitive information, computer systems, networks, and software applications from digital attacks. "Cyber attack" is a general term that covers a large number of threats; some of the common types of attacks…
Cryptography and its Types
Cryptography is a technique for securing information and communications using codes to ensure confidentiality, integrity, and authentication, thus preventing unauthorized access to information. The prefix "crypt" means "hidden" and the suffix "graphy" means "writing". In cryptography, the techniques…
Implementation of Access Matrix in Distributed OS
As discussed earlier, the access matrix is likely to be very sparse and takes up a large chunk of memory, so direct implementation of the access matrix for access control is storage-inefficient. The inefficiency can be removed by decomposing the access matrix into rows or columns. Rows can be collapsed…
Digital Signatures and Certificates
Digital signatures and certificates are two key technologies that play an important role in ensuring the security and authenticity of online activities. They are essential for activities such as online banking, secure email communication, software distribution, and electronic document signing.
Design Principles of Security in Distributed System
Design Principles of Security in Distributed Systems explores essential strategies to safeguard data integrity, confidentiality, and availability across interconnected nodes. This article addresses the complexities and critical considerations for implementing robust security measures in distributed…
Distributed Multimedia and Database System
Distributed Database System
A distributed database is a database that is not limited to one system; it is spread over different sites, i.e., on multiple computers or over a network of computers. A distributed database system is located on various sites that don't share physical components. This may be required when a…
Functions of Distributed Database System
Distributed database systems play an important role in modern data management by distributing data across multiple nodes. This article explores their functions, including data distribution, replication, query processing, and security, highlighting how these systems optimize performance and ensure availability…
Multimedia Database
A multimedia database is a collection of interrelated multimedia data that includes text, graphics (sketches, drawings), images, animations, video, audio, etc., and holds vast amounts of multisource multimedia data. The framework that manages different types of multimedia data, which can be stored, delivered…